00:00:00.001 Started by upstream project "autotest-per-patch" build number 126121 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.114 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.154 Using shallow fetch with depth 1 00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.188 > git --version # 'git version 2.39.2' 00:00:00.188 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.224 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.224 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.440 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.451 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.451 > git config core.sparsecheckout # timeout=10 00:00:04.462 > git read-tree -mu HEAD # timeout=10 00:00:04.477 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.495 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.496 > git rev-list --no-walk 
308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:04.610 [Pipeline] Start of Pipeline 00:00:04.624 [Pipeline] library 00:00:04.626 Loading library shm_lib@master 00:00:04.626 Library shm_lib@master is cached. Copying from home. 00:00:04.643 [Pipeline] node 00:00:04.652 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.653 [Pipeline] { 00:00:04.665 [Pipeline] catchError 00:00:04.666 [Pipeline] { 00:00:04.678 [Pipeline] wrap 00:00:04.685 [Pipeline] { 00:00:04.692 [Pipeline] stage 00:00:04.694 [Pipeline] { (Prologue) 00:00:04.865 [Pipeline] sh 00:00:05.150 + logger -p user.info -t JENKINS-CI 00:00:05.170 [Pipeline] echo 00:00:05.172 Node: WFP8 00:00:05.180 [Pipeline] sh 00:00:05.477 [Pipeline] setCustomBuildProperty 00:00:05.488 [Pipeline] echo 00:00:05.489 Cleanup processes 00:00:05.494 [Pipeline] sh 00:00:05.775 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.775 2255509 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.786 [Pipeline] sh 00:00:06.066 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.066 ++ grep -v 'sudo pgrep' 00:00:06.066 ++ awk '{print $1}' 00:00:06.066 + sudo kill -9 00:00:06.066 + true 00:00:06.078 [Pipeline] cleanWs 00:00:06.086 [WS-CLEANUP] Deleting project workspace... 00:00:06.086 [WS-CLEANUP] Deferred wipeout is used... 
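The "Cleanup processes" stage above uses a standard shell idiom: `pgrep -af` lists candidate processes by full command line, `grep -v 'sudo pgrep'` drops the pgrep invocation itself, `awk '{print $1}'` extracts the PID column, and the trailing `+ true` keeps the stage green when nothing matched. A minimal, non-destructive sketch of the same pattern (the match string is a placeholder, not the CI workspace path, and the kill is left commented out):

```shell
#!/bin/sh
# List PIDs whose command line matches a pattern, excluding the
# pgrep/grep helpers themselves. Placeholder pattern for illustration.
pattern='nonexistent-workspace-placeholder'
pids=$(pgrep -af "$pattern" | grep -v 'pgrep' | awk '{print $1}')
# sudo kill -9 $pids || true   # destructive step, kept commented out
echo "matched pids: ${pids:-none}"
```

The `|| true` (logged as `+ true`) matters under `set -e`: `kill` with an empty PID list, or `pgrep` finding nothing, would otherwise abort the whole pipeline stage.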
00:00:06.092 [WS-CLEANUP] done 00:00:06.095 [Pipeline] setCustomBuildProperty 00:00:06.106 [Pipeline] sh 00:00:06.382 + sudo git config --global --replace-all safe.directory '*' 00:00:06.447 [Pipeline] httpRequest 00:00:06.486 [Pipeline] echo 00:00:06.487 Sorcerer 10.211.164.101 is alive 00:00:06.492 [Pipeline] httpRequest 00:00:06.496 HttpMethod: GET 00:00:06.496 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.497 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.518 Response Code: HTTP/1.1 200 OK 00:00:06.519 Success: Status code 200 is in the accepted range: 200,404 00:00:06.519 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:20.422 [Pipeline] sh 00:00:20.713 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:20.734 [Pipeline] httpRequest 00:00:20.775 [Pipeline] echo 00:00:20.777 Sorcerer 10.211.164.101 is alive 00:00:20.787 [Pipeline] httpRequest 00:00:20.791 HttpMethod: GET 00:00:20.792 URL: http://10.211.164.101/packages/spdk_192cfc3737ce6cd1b406227ec8afa3e87d7e0a12.tar.gz 00:00:20.793 Sending request to url: http://10.211.164.101/packages/spdk_192cfc3737ce6cd1b406227ec8afa3e87d7e0a12.tar.gz 00:00:20.822 Response Code: HTTP/1.1 200 OK 00:00:20.823 Success: Status code 200 is in the accepted range: 200,404 00:00:20.824 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_192cfc3737ce6cd1b406227ec8afa3e87d7e0a12.tar.gz 00:01:50.171 [Pipeline] sh 00:01:50.456 + tar --no-same-owner -xf spdk_192cfc3737ce6cd1b406227ec8afa3e87d7e0a12.tar.gz 00:01:53.001 [Pipeline] sh 00:01:53.281 + git -C spdk log --oneline -n5 00:01:53.281 192cfc373 test/common/autotest_common: managing idxd drivers setup 00:01:53.281 e118fc0cd test/setup: add configuration script for dsa devices 00:01:53.281 719d03c6a sock/uring: only register net impl 
if supported 00:01:53.281 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:53.281 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:53.292 [Pipeline] } 00:01:53.304 [Pipeline] // stage 00:01:53.311 [Pipeline] stage 00:01:53.313 [Pipeline] { (Prepare) 00:01:53.327 [Pipeline] writeFile 00:01:53.340 [Pipeline] sh 00:01:53.620 + logger -p user.info -t JENKINS-CI 00:01:53.631 [Pipeline] sh 00:01:53.911 + logger -p user.info -t JENKINS-CI 00:01:53.924 [Pipeline] sh 00:01:54.209 + cat autorun-spdk.conf 00:01:54.209 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.209 SPDK_TEST_NVMF=1 00:01:54.209 SPDK_TEST_NVME_CLI=1 00:01:54.209 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:54.209 SPDK_TEST_NVMF_NICS=e810 00:01:54.209 SPDK_TEST_VFIOUSER=1 00:01:54.209 SPDK_RUN_UBSAN=1 00:01:54.209 NET_TYPE=phy 00:01:54.216 RUN_NIGHTLY=0 00:01:54.220 [Pipeline] readFile 00:01:54.247 [Pipeline] withEnv 00:01:54.249 [Pipeline] { 00:01:54.260 [Pipeline] sh 00:01:54.541 + set -ex 00:01:54.542 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:54.542 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:54.542 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.542 ++ SPDK_TEST_NVMF=1 00:01:54.542 ++ SPDK_TEST_NVME_CLI=1 00:01:54.542 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:54.542 ++ SPDK_TEST_NVMF_NICS=e810 00:01:54.542 ++ SPDK_TEST_VFIOUSER=1 00:01:54.542 ++ SPDK_RUN_UBSAN=1 00:01:54.542 ++ NET_TYPE=phy 00:01:54.542 ++ RUN_NIGHTLY=0 00:01:54.542 + case $SPDK_TEST_NVMF_NICS in 00:01:54.542 + DRIVERS=ice 00:01:54.542 + [[ tcp == \r\d\m\a ]] 00:01:54.542 + [[ -n ice ]] 00:01:54.542 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:54.542 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:54.542 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:54.542 rmmod: ERROR: Module irdma is not currently loaded 00:01:54.542 rmmod: ERROR: Module i40iw is not currently loaded 00:01:54.542 rmmod: ERROR: Module iw_cxgb4 is not 
currently loaded 00:01:54.542 + true 00:01:54.542 + for D in $DRIVERS 00:01:54.542 + sudo modprobe ice 00:01:54.542 + exit 0 00:01:54.551 [Pipeline] } 00:01:54.570 [Pipeline] // withEnv 00:01:54.575 [Pipeline] } 00:01:54.595 [Pipeline] // stage 00:01:54.605 [Pipeline] catchError 00:01:54.606 [Pipeline] { 00:01:54.619 [Pipeline] timeout 00:01:54.619 Timeout set to expire in 50 min 00:01:54.620 [Pipeline] { 00:01:54.631 [Pipeline] stage 00:01:54.632 [Pipeline] { (Tests) 00:01:54.643 [Pipeline] sh 00:01:54.924 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.924 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.924 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.924 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:54.924 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.924 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:54.925 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:54.925 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:54.925 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:54.925 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:54.925 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:54.925 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:54.925 + source /etc/os-release 00:01:54.925 ++ NAME='Fedora Linux' 00:01:54.925 ++ VERSION='38 (Cloud Edition)' 00:01:54.925 ++ ID=fedora 00:01:54.925 ++ VERSION_ID=38 00:01:54.925 ++ VERSION_CODENAME= 00:01:54.925 ++ PLATFORM_ID=platform:f38 00:01:54.925 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:54.925 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:54.925 ++ LOGO=fedora-logo-icon 00:01:54.925 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:54.925 ++ HOME_URL=https://fedoraproject.org/ 00:01:54.925 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:54.925 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:54.925 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:54.925 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:54.925 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:54.925 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:54.925 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:54.925 ++ SUPPORT_END=2024-05-14 00:01:54.925 ++ VARIANT='Cloud Edition' 00:01:54.925 ++ VARIANT_ID=cloud 00:01:54.925 + uname -a 00:01:54.925 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:54.925 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:56.834 Hugepages 00:01:56.834 node hugesize free / total 00:01:56.834 node0 1048576kB 0 / 0 00:01:56.834 node0 2048kB 0 / 0 00:01:57.094 node1 1048576kB 0 / 0 00:01:57.094 node1 2048kB 0 / 0 00:01:57.094 00:01:57.094 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.094 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:57.094 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:57.094 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:57.094 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:57.094 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:57.094 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:57.094 + rm -f /tmp/spdk-ld-path 00:01:57.094 + source autorun-spdk.conf 00:01:57.094 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.094 ++ SPDK_TEST_NVMF=1 00:01:57.094 ++ SPDK_TEST_NVME_CLI=1 00:01:57.094 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.094 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.094 ++ SPDK_TEST_VFIOUSER=1 00:01:57.094 ++ SPDK_RUN_UBSAN=1 00:01:57.094 ++ NET_TYPE=phy 00:01:57.094 ++ RUN_NIGHTLY=0 00:01:57.094 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.094 + [[ -n '' ]] 00:01:57.094 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.094 + for M in /var/spdk/build-*-manifest.txt 00:01:57.094 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.094 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.094 + for M in /var/spdk/build-*-manifest.txt 00:01:57.094 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.094 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:57.095 ++ uname 00:01:57.095 + [[ Linux == \L\i\n\u\x ]] 00:01:57.095 + sudo dmesg -T 
00:01:57.095 + sudo dmesg --clear 00:01:57.355 + dmesg_pid=2256945 00:01:57.355 + [[ Fedora Linux == FreeBSD ]] 00:01:57.355 + sudo dmesg -Tw 00:01:57.355 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.355 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.355 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.355 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.355 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.355 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.355 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.355 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.355 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.355 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.355 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.355 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.355 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.355 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.355 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.355 Test configuration: 00:01:57.355 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.355 SPDK_TEST_NVMF=1 00:01:57.355 SPDK_TEST_NVME_CLI=1 00:01:57.355 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.355 SPDK_TEST_NVMF_NICS=e810 00:01:57.355 SPDK_TEST_VFIOUSER=1 00:01:57.355 SPDK_RUN_UBSAN=1 00:01:57.355 NET_TYPE=phy 00:01:57.355 RUN_NIGHTLY=0 14:06:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.355 14:06:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.355 14:06:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.355 14:06:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.355 14:06:49 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.355 14:06:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.355 14:06:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.355 14:06:49 -- paths/export.sh@5 -- $ export PATH 00:01:57.355 14:06:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.355 14:06:49 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.355 14:06:49 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:57.355 14:06:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720786009.XXXXXX 
00:01:57.355 14:06:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720786009.DEjujC 00:01:57.355 14:06:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:57.355 14:06:49 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:57.355 14:06:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:57.355 14:06:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:57.355 14:06:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.355 14:06:49 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:57.355 14:06:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:57.355 14:06:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.355 14:06:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:57.355 14:06:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:57.355 14:06:49 -- pm/common@17 -- $ local monitor 00:01:57.355 14:06:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.355 14:06:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.355 14:06:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.355 14:06:49 -- pm/common@21 -- $ date +%s 00:01:57.355 14:06:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.355 14:06:49 -- pm/common@21 -- $ date +%s 00:01:57.355 14:06:49 -- pm/common@25 -- $ sleep 
1 00:01:57.355 14:06:49 -- pm/common@21 -- $ date +%s 00:01:57.355 14:06:49 -- pm/common@21 -- $ date +%s 00:01:57.355 14:06:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720786009 00:01:57.355 14:06:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720786009 00:01:57.355 14:06:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720786009 00:01:57.355 14:06:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720786009 00:01:57.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720786009_collect-vmstat.pm.log 00:01:57.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720786009_collect-cpu-load.pm.log 00:01:57.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720786009_collect-cpu-temp.pm.log 00:01:57.355 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720786009_collect-bmc-pm.bmc.pm.log 00:01:58.351 14:06:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:58.351 14:06:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.351 14:06:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.351 14:06:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.351 14:06:50 -- 
spdk/autobuild.sh@16 -- $ date -u 00:01:58.351 Fri Jul 12 12:06:50 PM UTC 2024 00:01:58.351 14:06:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.351 v24.09-pre-204-g192cfc373 00:01:58.351 14:06:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.351 14:06:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.351 14:06:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.351 14:06:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:58.351 14:06:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:58.351 14:06:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.351 ************************************ 00:01:58.351 START TEST ubsan 00:01:58.351 ************************************ 00:01:58.351 14:06:50 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:58.351 using ubsan 00:01:58.351 00:01:58.351 real 0m0.000s 00:01:58.351 user 0m0.000s 00:01:58.351 sys 0m0.000s 00:01:58.351 14:06:50 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:58.351 14:06:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.351 ************************************ 00:01:58.351 END TEST ubsan 00:01:58.351 ************************************ 00:01:58.351 14:06:50 -- common/autotest_common.sh@1142 -- $ return 0 00:01:58.351 14:06:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:58.351 14:06:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:58.351 14:06:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:58.351 14:06:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:58.610 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:58.610 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:58.870 Using 'verbs' RDMA provider 00:02:12.024 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:22.006 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:22.006 Creating mk/config.mk...done. 00:02:22.006 Creating mk/cc.flags.mk...done. 00:02:22.006 Type 'make' to build. 00:02:22.006 14:07:13 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:22.006 14:07:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:22.006 14:07:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:22.006 14:07:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.006 ************************************ 00:02:22.006 START TEST make 00:02:22.006 ************************************ 00:02:22.006 14:07:13 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:22.570 make[1]: Nothing to be done for 'all'. 
00:02:23.512 The Meson build system 00:02:23.512 Version: 1.3.1 00:02:23.512 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:23.512 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:23.512 Build type: native build 00:02:23.512 Project name: libvfio-user 00:02:23.512 Project version: 0.0.1 00:02:23.512 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:23.512 C linker for the host machine: cc ld.bfd 2.39-16 00:02:23.512 Host machine cpu family: x86_64 00:02:23.512 Host machine cpu: x86_64 00:02:23.512 Run-time dependency threads found: YES 00:02:23.512 Library dl found: YES 00:02:23.512 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:23.512 Run-time dependency json-c found: YES 0.17 00:02:23.512 Run-time dependency cmocka found: YES 1.1.7 00:02:23.512 Program pytest-3 found: NO 00:02:23.512 Program flake8 found: NO 00:02:23.512 Program misspell-fixer found: NO 00:02:23.512 Program restructuredtext-lint found: NO 00:02:23.512 Program valgrind found: YES (/usr/bin/valgrind) 00:02:23.512 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.512 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.512 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.513 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:23.513 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:23.513 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:23.513 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:23.513 Build targets in project: 8 00:02:23.513 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:23.513 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:23.513 00:02:23.513 libvfio-user 0.0.1 00:02:23.513 00:02:23.513 User defined options 00:02:23.513 buildtype : debug 00:02:23.513 default_library: shared 00:02:23.513 libdir : /usr/local/lib 00:02:23.513 00:02:23.513 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.085 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:24.085 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:24.085 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:24.085 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:24.085 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:24.085 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:24.085 [6/37] Compiling C object samples/null.p/null.c.o 00:02:24.085 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:24.085 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:24.085 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:24.085 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:24.085 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:24.085 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:24.085 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:24.085 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:24.085 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:24.085 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:24.085 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:24.085 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:24.085 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:24.085 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:24.085 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:24.085 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:24.085 [23/37] Compiling C object samples/server.p/server.c.o 00:02:24.085 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:24.085 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:24.343 [26/37] Compiling C object samples/client.p/client.c.o 00:02:24.343 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:24.343 [28/37] Linking target samples/client 00:02:24.343 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:24.343 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:24.343 [31/37] Linking target test/unit_tests 00:02:24.343 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:24.343 [33/37] Linking target samples/gpio-pci-idio-16 00:02:24.343 [34/37] Linking target samples/server 00:02:24.343 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:24.343 [36/37] Linking target samples/null 00:02:24.343 [37/37] Linking target samples/lspci 00:02:24.343 INFO: autodetecting backend as ninja 00:02:24.343 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:24.601 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:24.859 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:24.859 ninja: no work to do. 
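The long run of `Compiler for C supports arguments … : YES/NO` lines in the DPDK configure output below comes from Meson probing each flag by test-compiling a trivial translation unit with it. The same check can be sketched directly in shell (assuming only that a `cc` is on PATH; `-Werror` promotes "unrecognized option" diagnostics to hard failures so unsupported flags report NO, and `-Wbogus-flag-xyz` is a deliberately invalid flag for illustration):

```shell
#!/bin/sh
# Probe whether the C compiler accepts a flag by compiling an empty
# program with -Werror, mirroring Meson's "supports arguments" checks.
supports_cflag() {
    echo 'int main(void){return 0;}' \
        | cc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}
for flag in -Wwrite-strings -Wbogus-flag-xyz; do
    if supports_cflag "$flag"; then
        echo "cc supports $flag: YES"
    else
        echo "cc supports $flag: NO"
    fi
done
```

The cached entries in the log (e.g. `-mavx512f: YES (cached)`) are Meson reusing an earlier probe result rather than re-running the compile.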
00:02:30.120 The Meson build system
00:02:30.120 Version: 1.3.1
00:02:30.120 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:30.120 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:30.120 Build type: native build
00:02:30.120 Program cat found: YES (/usr/bin/cat)
00:02:30.120 Project name: DPDK
00:02:30.120 Project version: 24.03.0
00:02:30.120 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:30.120 C linker for the host machine: cc ld.bfd 2.39-16
00:02:30.120 Host machine cpu family: x86_64
00:02:30.120 Host machine cpu: x86_64
00:02:30.120 Message: ## Building in Developer Mode ##
00:02:30.120 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:30.120 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:30.120 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:30.120 Program python3 found: YES (/usr/bin/python3)
00:02:30.120 Program cat found: YES (/usr/bin/cat)
00:02:30.120 Compiler for C supports arguments -march=native: YES
00:02:30.120 Checking for size of "void *" : 8
00:02:30.120 Checking for size of "void *" : 8 (cached)
00:02:30.120 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:30.120 Library m found: YES
00:02:30.120 Library numa found: YES
00:02:30.120 Has header "numaif.h" : YES
00:02:30.120 Library fdt found: NO
00:02:30.120 Library execinfo found: NO
00:02:30.120 Has header "execinfo.h" : YES
00:02:30.120 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:30.120 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:30.120 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:30.120 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:30.120 Run-time dependency openssl found: YES 3.0.9
00:02:30.120 Run-time dependency libpcap found: YES 1.10.4
00:02:30.120 Has header "pcap.h" with dependency libpcap: YES
00:02:30.120 Compiler for C supports arguments -Wcast-qual: YES
00:02:30.120 Compiler for C supports arguments -Wdeprecated: YES
00:02:30.120 Compiler for C supports arguments -Wformat: YES
00:02:30.120 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:30.120 Compiler for C supports arguments -Wformat-security: NO
00:02:30.120 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:30.120 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:30.120 Compiler for C supports arguments -Wnested-externs: YES
00:02:30.120 Compiler for C supports arguments -Wold-style-definition: YES
00:02:30.120 Compiler for C supports arguments -Wpointer-arith: YES
00:02:30.120 Compiler for C supports arguments -Wsign-compare: YES
00:02:30.120 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:30.120 Compiler for C supports arguments -Wundef: YES
00:02:30.120 Compiler for C supports arguments -Wwrite-strings: YES
00:02:30.120 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:30.120 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:30.120 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:30.120 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:30.120 Program objdump found: YES (/usr/bin/objdump)
00:02:30.120 Compiler for C supports arguments -mavx512f: YES
00:02:30.120 Checking if "AVX512 checking" compiles: YES
00:02:30.120 Fetching value of define "__SSE4_2__" : 1
00:02:30.120 Fetching value of define "__AES__" : 1
00:02:30.120 Fetching value of define "__AVX__" : 1
00:02:30.120 Fetching value of define "__AVX2__" : 1
00:02:30.120 Fetching value of define "__AVX512BW__" : 1
00:02:30.120 Fetching value of define "__AVX512CD__" : 1
00:02:30.120 Fetching value of define "__AVX512DQ__" : 1
00:02:30.120 Fetching value of define "__AVX512F__" : 1
00:02:30.120 Fetching value of define "__AVX512VL__" : 1
00:02:30.120 Fetching value of define "__PCLMUL__" : 1
00:02:30.120 Fetching value of define "__RDRND__" : 1
00:02:30.120 Fetching value of define "__RDSEED__" : 1
00:02:30.120 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:30.120 Fetching value of define "__znver1__" : (undefined)
00:02:30.120 Fetching value of define "__znver2__" : (undefined)
00:02:30.120 Fetching value of define "__znver3__" : (undefined)
00:02:30.120 Fetching value of define "__znver4__" : (undefined)
00:02:30.120 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:30.120 Message: lib/log: Defining dependency "log"
00:02:30.120 Message: lib/kvargs: Defining dependency "kvargs"
00:02:30.120 Message: lib/telemetry: Defining dependency "telemetry"
00:02:30.120 Checking for function "getentropy" : NO
00:02:30.120 Message: lib/eal: Defining dependency "eal"
00:02:30.120 Message: lib/ring: Defining dependency "ring"
00:02:30.120 Message: lib/rcu: Defining dependency "rcu"
00:02:30.120 Message: lib/mempool: Defining dependency "mempool"
00:02:30.120 Message: lib/mbuf: Defining dependency "mbuf"
00:02:30.120 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:30.120 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:30.120 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:30.120 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:30.120 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:30.120 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:30.120 Compiler for C supports arguments -mpclmul: YES
00:02:30.120 Compiler for C supports arguments -maes: YES
00:02:30.120 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:30.120 Compiler for C supports arguments -mavx512bw: YES
00:02:30.120 Compiler for C supports arguments -mavx512dq: YES
00:02:30.120 Compiler for C supports arguments -mavx512vl: YES
00:02:30.120 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:30.120 Compiler for C supports arguments -mavx2: YES
00:02:30.120 Compiler for C supports arguments -mavx: YES
00:02:30.120 Message: lib/net: Defining dependency "net"
00:02:30.120 Message: lib/meter: Defining dependency "meter"
00:02:30.120 Message: lib/ethdev: Defining dependency "ethdev"
00:02:30.120 Message: lib/pci: Defining dependency "pci"
00:02:30.120 Message: lib/cmdline: Defining dependency "cmdline"
00:02:30.120 Message: lib/hash: Defining dependency "hash"
00:02:30.120 Message: lib/timer: Defining dependency "timer"
00:02:30.120 Message: lib/compressdev: Defining dependency "compressdev"
00:02:30.120 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:30.120 Message: lib/dmadev: Defining dependency "dmadev"
00:02:30.120 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:30.120 Message: lib/power: Defining dependency "power"
00:02:30.120 Message: lib/reorder: Defining dependency "reorder"
00:02:30.120 Message: lib/security: Defining dependency "security"
00:02:30.120 Has header "linux/userfaultfd.h" : YES
00:02:30.120 Has header "linux/vduse.h" : YES
00:02:30.120 Message: lib/vhost: Defining dependency "vhost"
00:02:30.120 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:30.120 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:30.120 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:30.120 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:30.120 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:30.120 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:30.120 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:30.120 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:30.120 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:30.120 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:30.120 Program doxygen found: YES (/usr/bin/doxygen)
00:02:30.120 Configuring doxy-api-html.conf using configuration
00:02:30.120 Configuring doxy-api-man.conf using configuration
00:02:30.120 Program mandb found: YES (/usr/bin/mandb)
00:02:30.120 Program sphinx-build found: NO
00:02:30.120 Configuring rte_build_config.h using configuration
00:02:30.120 Message:
00:02:30.120 =================
00:02:30.120 Applications Enabled
00:02:30.120 =================
00:02:30.120
00:02:30.120 apps:
00:02:30.120
00:02:30.120
00:02:30.120 Message:
00:02:30.120 =================
00:02:30.120 Libraries Enabled
00:02:30.120 =================
00:02:30.120
00:02:30.120 libs:
00:02:30.120 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:30.120 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:30.121 cryptodev, dmadev, power, reorder, security, vhost,
00:02:30.121
00:02:30.121 Message:
00:02:30.121 ===============
00:02:30.121 Drivers Enabled
00:02:30.121 ===============
00:02:30.121
00:02:30.121 common:
00:02:30.121
00:02:30.121 bus:
00:02:30.121 pci, vdev,
00:02:30.121 mempool:
00:02:30.121 ring,
00:02:30.121 dma:
00:02:30.121
00:02:30.121 net:
00:02:30.121
00:02:30.121 crypto:
00:02:30.121
00:02:30.121 compress:
00:02:30.121
00:02:30.121 vdpa:
00:02:30.121
00:02:30.121
00:02:30.121 Message:
00:02:30.121 =================
00:02:30.121 Content Skipped
00:02:30.121 =================
00:02:30.121
00:02:30.121 apps:
00:02:30.121 dumpcap: explicitly disabled via build config
00:02:30.121 graph: explicitly disabled via build config
00:02:30.121 pdump: explicitly disabled via build config
00:02:30.121 proc-info: explicitly disabled via build config
00:02:30.121 test-acl: explicitly disabled via build config
00:02:30.121 test-bbdev: explicitly disabled via build config
00:02:30.121 test-cmdline: explicitly disabled via build config
00:02:30.121 test-compress-perf: explicitly disabled via build config
00:02:30.121 test-crypto-perf: explicitly disabled via build config
00:02:30.121 test-dma-perf: explicitly disabled via build config
00:02:30.121 test-eventdev: explicitly disabled via build config
00:02:30.121 test-fib: explicitly disabled via build config
00:02:30.121 test-flow-perf: explicitly disabled via build config
00:02:30.121 test-gpudev: explicitly disabled via build config
00:02:30.121 test-mldev: explicitly disabled via build config
00:02:30.121 test-pipeline: explicitly disabled via build config
00:02:30.121 test-pmd: explicitly disabled via build config
00:02:30.121 test-regex: explicitly disabled via build config
00:02:30.121 test-sad: explicitly disabled via build config
00:02:30.121 test-security-perf: explicitly disabled via build config
00:02:30.121
00:02:30.121 libs:
00:02:30.121 argparse: explicitly disabled via build config
00:02:30.121 metrics: explicitly disabled via build config
00:02:30.121 acl: explicitly disabled via build config
00:02:30.121 bbdev: explicitly disabled via build config
00:02:30.121 bitratestats: explicitly disabled via build config
00:02:30.121 bpf: explicitly disabled via build config
00:02:30.121 cfgfile: explicitly disabled via build config
00:02:30.121 distributor: explicitly disabled via build config
00:02:30.121 efd: explicitly disabled via build config
00:02:30.121 eventdev: explicitly disabled via build config
00:02:30.121 dispatcher: explicitly disabled via build config
00:02:30.121 gpudev: explicitly disabled via build config
00:02:30.121 gro: explicitly disabled via build config
00:02:30.121 gso: explicitly disabled via build config
00:02:30.121 ip_frag: explicitly disabled via build config
00:02:30.121 jobstats: explicitly disabled via build config
00:02:30.121 latencystats: explicitly disabled via build config
00:02:30.121 lpm: explicitly disabled via build config
00:02:30.121 member: explicitly disabled via build config
00:02:30.121 pcapng: explicitly disabled via build config
00:02:30.121 rawdev: explicitly disabled via build config
00:02:30.121 regexdev: explicitly disabled via build config
00:02:30.121 mldev: explicitly disabled via build config
00:02:30.121 rib: explicitly disabled via build config
00:02:30.121 sched: explicitly disabled via build config
00:02:30.121 stack: explicitly disabled via build config
00:02:30.121 ipsec: explicitly disabled via build config
00:02:30.121 pdcp: explicitly disabled via build config
00:02:30.121 fib: explicitly disabled via build config
00:02:30.121 port: explicitly disabled via build config
00:02:30.121 pdump: explicitly disabled via build config
00:02:30.121 table: explicitly disabled via build config
00:02:30.121 pipeline: explicitly disabled via build config
00:02:30.121 graph: explicitly disabled via build config
00:02:30.121 node: explicitly disabled via build config
00:02:30.121
00:02:30.121 drivers:
00:02:30.121 common/cpt: not in enabled drivers build config
00:02:30.121 common/dpaax: not in enabled drivers build config
00:02:30.121 common/iavf: not in enabled drivers build config
00:02:30.121 common/idpf: not in enabled drivers build config
00:02:30.121 common/ionic: not in enabled drivers build config
00:02:30.121 common/mvep: not in enabled drivers build config
00:02:30.121 common/octeontx: not in enabled drivers build config
00:02:30.121 bus/auxiliary: not in enabled drivers build config
00:02:30.121 bus/cdx: not in enabled drivers build config
00:02:30.121 bus/dpaa: not in enabled drivers build config
00:02:30.121 bus/fslmc: not in enabled drivers build config
00:02:30.121 bus/ifpga: not in enabled drivers build config
00:02:30.121 bus/platform: not in enabled drivers build config
00:02:30.121 bus/uacce: not in enabled drivers build config
00:02:30.121 bus/vmbus: not in enabled drivers build config
00:02:30.121 common/cnxk: not in enabled drivers build config
00:02:30.121 common/mlx5: not in enabled drivers build config
00:02:30.121 common/nfp: not in enabled drivers build config
00:02:30.121 common/nitrox: not in enabled drivers build config
00:02:30.121 common/qat: not in enabled drivers build config
00:02:30.121 common/sfc_efx: not in enabled drivers build config
00:02:30.121 mempool/bucket: not in enabled drivers build config
00:02:30.121 mempool/cnxk: not in enabled drivers build config
00:02:30.121 mempool/dpaa: not in enabled drivers build config
00:02:30.121 mempool/dpaa2: not in enabled drivers build config
00:02:30.121 mempool/octeontx: not in enabled drivers build config
00:02:30.121 mempool/stack: not in enabled drivers build config
00:02:30.121 dma/cnxk: not in enabled drivers build config
00:02:30.121 dma/dpaa: not in enabled drivers build config
00:02:30.121 dma/dpaa2: not in enabled drivers build config
00:02:30.121 dma/hisilicon: not in enabled drivers build config
00:02:30.121 dma/idxd: not in enabled drivers build config
00:02:30.121 dma/ioat: not in enabled drivers build config
00:02:30.121 dma/skeleton: not in enabled drivers build config
00:02:30.121 net/af_packet: not in enabled drivers build config
00:02:30.121 net/af_xdp: not in enabled drivers build config
00:02:30.121 net/ark: not in enabled drivers build config
00:02:30.121 net/atlantic: not in enabled drivers build config
00:02:30.121 net/avp: not in enabled drivers build config
00:02:30.121 net/axgbe: not in enabled drivers build config
00:02:30.121 net/bnx2x: not in enabled drivers build config
00:02:30.121 net/bnxt: not in enabled drivers build config
00:02:30.121 net/bonding: not in enabled drivers build config
00:02:30.121 net/cnxk: not in enabled drivers build config
00:02:30.121 net/cpfl: not in enabled drivers build config
00:02:30.121 net/cxgbe: not in enabled drivers build config
00:02:30.121 net/dpaa: not in enabled drivers build config
00:02:30.121 net/dpaa2: not in enabled drivers build config
00:02:30.121 net/e1000: not in enabled drivers build config
00:02:30.121 net/ena: not in enabled drivers build config
00:02:30.121 net/enetc: not in enabled drivers build config
00:02:30.121 net/enetfec: not in enabled drivers build config
00:02:30.121 net/enic: not in enabled drivers build config
00:02:30.121 net/failsafe: not in enabled drivers build config
00:02:30.121 net/fm10k: not in enabled drivers build config
00:02:30.121 net/gve: not in enabled drivers build config
00:02:30.121 net/hinic: not in enabled drivers build config
00:02:30.121 net/hns3: not in enabled drivers build config
00:02:30.121 net/i40e: not in enabled drivers build config
00:02:30.121 net/iavf: not in enabled drivers build config
00:02:30.121 net/ice: not in enabled drivers build config
00:02:30.121 net/idpf: not in enabled drivers build config
00:02:30.121 net/igc: not in enabled drivers build config
00:02:30.121 net/ionic: not in enabled drivers build config
00:02:30.121 net/ipn3ke: not in enabled drivers build config
00:02:30.121 net/ixgbe: not in enabled drivers build config
00:02:30.121 net/mana: not in enabled drivers build config
00:02:30.121 net/memif: not in enabled drivers build config
00:02:30.121 net/mlx4: not in enabled drivers build config
00:02:30.121 net/mlx5: not in enabled drivers build config
00:02:30.121 net/mvneta: not in enabled drivers build config
00:02:30.121 net/mvpp2: not in enabled drivers build config
00:02:30.121 net/netvsc: not in enabled drivers build config
00:02:30.121 net/nfb: not in enabled drivers build config
00:02:30.121 net/nfp: not in enabled drivers build config
00:02:30.121 net/ngbe: not in enabled drivers build config
00:02:30.121 net/null: not in enabled drivers build config
00:02:30.121 net/octeontx: not in enabled drivers build config
00:02:30.121 net/octeon_ep: not in enabled drivers build config
00:02:30.121 net/pcap: not in enabled drivers build config
00:02:30.121 net/pfe: not in enabled drivers build config
00:02:30.121 net/qede: not in enabled drivers build config
00:02:30.121 net/ring: not in enabled drivers build config
00:02:30.121 net/sfc: not in enabled drivers build config
00:02:30.121 net/softnic: not in enabled drivers build config
00:02:30.121 net/tap: not in enabled drivers build config
00:02:30.121 net/thunderx: not in enabled drivers build config
00:02:30.121 net/txgbe: not in enabled drivers build config
00:02:30.121 net/vdev_netvsc: not in enabled drivers build config
00:02:30.121 net/vhost: not in enabled drivers build config
00:02:30.121 net/virtio: not in enabled drivers build config
00:02:30.121 net/vmxnet3: not in enabled drivers build config
00:02:30.121 raw/*: missing internal dependency, "rawdev"
00:02:30.121 crypto/armv8: not in enabled drivers build config
00:02:30.121 crypto/bcmfs: not in enabled drivers build config
00:02:30.121 crypto/caam_jr: not in enabled drivers build config
00:02:30.121 crypto/ccp: not in enabled drivers build config
00:02:30.121 crypto/cnxk: not in enabled drivers build config
00:02:30.121 crypto/dpaa_sec: not in enabled drivers build config
00:02:30.121 crypto/dpaa2_sec: not in enabled drivers build config
00:02:30.121 crypto/ipsec_mb: not in enabled drivers build config
00:02:30.121 crypto/mlx5: not in enabled drivers build config
00:02:30.121 crypto/mvsam: not in enabled drivers build config
00:02:30.121 crypto/nitrox: not in enabled drivers build config
00:02:30.121 crypto/null: not in enabled drivers build config
00:02:30.121 crypto/octeontx: not in enabled drivers build config
00:02:30.121 crypto/openssl: not in enabled drivers build config
00:02:30.121 crypto/scheduler: not in enabled drivers build config
00:02:30.121 crypto/uadk: not in enabled drivers build config
00:02:30.121 crypto/virtio: not in enabled drivers build config
00:02:30.121 compress/isal: not in enabled drivers build config
00:02:30.121 compress/mlx5: not in enabled drivers build config
00:02:30.121 compress/nitrox: not in enabled drivers build config
00:02:30.121 compress/octeontx: not in enabled drivers build config
00:02:30.121 compress/zlib: not in enabled drivers build config
00:02:30.121 regex/*: missing internal dependency, "regexdev"
00:02:30.121 ml/*: missing internal dependency, "mldev"
00:02:30.121 vdpa/ifc: not in enabled drivers build config
00:02:30.121 vdpa/mlx5: not in enabled drivers build config
00:02:30.121 vdpa/nfp: not in enabled drivers build config
00:02:30.121 vdpa/sfc: not in enabled drivers build config
00:02:30.121 event/*: missing internal dependency, "eventdev"
00:02:30.121 baseband/*: missing internal dependency, "bbdev"
00:02:30.121 gpu/*: missing internal dependency, "gpudev"
00:02:30.121
00:02:30.121
00:02:30.121 Build targets in project: 85
00:02:30.121
00:02:30.121 DPDK 24.03.0
00:02:30.121
00:02:30.121 User defined options
00:02:30.121 buildtype : debug
00:02:30.121 default_library : shared
00:02:30.121 libdir : lib
00:02:30.121 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:30.121 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:30.121 c_link_args :
00:02:30.121 cpu_instruction_set: native
00:02:30.121 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:30.121 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:30.121 enable_docs : false
00:02:30.121 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:30.121 enable_kmods : false
00:02:30.121 max_lcores : 128
00:02:30.121 tests : false
00:02:30.121
00:02:30.121 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:30.384 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:30.384 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:30.384 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:30.384 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:30.384 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:30.650 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:30.650 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:30.650 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:30.650 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:30.650 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:30.650 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:30.650 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:30.650 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:30.650 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:30.650 [14/268] Linking static target lib/librte_kvargs.a
00:02:30.650 [15/268] Linking static target lib/librte_log.a
00:02:30.650 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:30.650 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:30.650 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:30.650 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:30.650 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:30.650 [21/268] Linking static target lib/librte_pci.a
00:02:30.650 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:30.650 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:30.911 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:30.911 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:30.911 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:30.911 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:30.911 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:30.911 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:30.911 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:30.911 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:30.911 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:30.911 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:30.911 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:30.911 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:30.911 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:30.911 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:30.911 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:30.911 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:30.911 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:30.911 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:30.911 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:30.911 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:30.911 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:30.911 [45/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:31.172 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:31.172 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:31.172 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:31.172 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:31.172 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:31.172 [51/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:31.172 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:31.172 [53/268] Linking static target lib/librte_meter.a
00:02:31.172 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:31.172 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:31.172 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:31.172 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:31.172 [58/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:31.172 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:31.172 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:31.172 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:31.172 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:31.172 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:31.172 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:31.172 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:31.172 [66/268] Linking static target lib/librte_ring.a
00:02:31.172 [67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:31.172 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:31.172 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:31.172 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:31.172 [71/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:31.172 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:31.172 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:31.172 [74/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:31.172 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:31.172 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:31.172 [77/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.172 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:31.172 [79/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:31.172 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:31.172 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:31.172 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:31.172 [83/268] Linking static target lib/librte_telemetry.a
00:02:31.172 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:31.172 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:31.172 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:31.172 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:31.172 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:31.172 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:31.172 [90/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.172 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:31.172 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:31.172 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:31.173 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:31.173 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:31.173 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:31.173 [97/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:31.173 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:31.173 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:31.173 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:31.173 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:31.173 [102/268] Linking static target lib/librte_net.a
00:02:31.173 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:31.173 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:31.173 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:31.173 [106/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:31.173 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:31.173 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:31.173 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:31.173 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:31.173 [111/268] Linking static target lib/librte_rcu.a
00:02:31.173 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:31.173 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:31.173 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:31.173 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:31.173 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:31.173 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:31.173 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:31.173 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:31.173 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:31.173 [121/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:31.173 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:31.173 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:31.173 [124/268] Linking static target lib/librte_mempool.a
00:02:31.173 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:31.173 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:31.173 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:31.173 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:31.173 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:31.432 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:31.432 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:31.432 [132/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:31.432 [133/268] Linking static target lib/librte_cmdline.a
00:02:31.432 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [135/268] Linking static target lib/librte_mbuf.a
00:02:31.432 [136/268] Linking static target lib/librte_eal.a
00:02:31.432 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:31.432 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:31.432 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [141/268] Linking target lib/librte_log.so.24.1
00:02:31.432 [142/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:31.432 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:31.432 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:31.432 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:31.432 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:31.432 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:31.432 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:31.432 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:31.432 [152/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:31.432 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:31.432 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:31.432 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:31.432 [156/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.432 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:31.432 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:31.432 [159/268] Linking target lib/librte_kvargs.so.24.1
00:02:31.432 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:31.432 [161/268] Linking static target lib/librte_timer.a
00:02:31.432 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:31.432 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:31.691 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:31.691 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:31.691 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:31.691 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:31.691 [168/268] Linking static target lib/librte_dmadev.a
00:02:31.691 [169/268] Linking target lib/librte_telemetry.so.24.1
00:02:31.691 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:31.691 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:31.691 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:31.691 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:31.691 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:31.691 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:31.691 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:31.691 [177/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:31.691 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:31.691 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:31.691 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:31.691 [181/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:31.691 [182/268] Linking static target lib/librte_compressdev.a
00:02:31.691 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:31.691 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:31.691 [185/268] Linking static target lib/librte_security.a
00:02:31.691 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:31.691 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:31.691 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:31.691 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:31.691 [190/268] Linking static target lib/librte_power.a
00:02:31.691 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:31.691 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:31.691 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:31.691 [194/268] Linking static target lib/librte_reorder.a
00:02:31.691 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:31.691 [196/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:31.691 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.691 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.691 [199/268] Linking static target drivers/librte_bus_vdev.a
00:02:31.950 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:31.950 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.950 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.950 [203/268] Linking static target lib/librte_hash.a
00:02:31.950 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:31.950 [205/268] Linking static target drivers/librte_mempool_ring.a
00:02:31.950 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.950 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.950 [208/268] Linking static target drivers/librte_bus_pci.a
00:02:31.950 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:31.950 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:31.950 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:31.950 [212/268] Linking static target lib/librte_cryptodev.a
00:02:31.950 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.950 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.950 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.209 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.210 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.210 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.210 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.210 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:32.210 [221/268] Linking static target lib/librte_ethdev.a
00:02:32.210 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.210 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.469 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:32.469 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.469 [226/268] Generating
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.727 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.662 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.662 [229/268] Linking static target lib/librte_vhost.a 00:02:33.662 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.043 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.352 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.920 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.920 [234/268] Linking target lib/librte_eal.so.24.1 00:02:40.920 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.178 [236/268] Linking target lib/librte_timer.so.24.1 00:02:41.178 [237/268] Linking target lib/librte_ring.so.24.1 00:02:41.178 [238/268] Linking target lib/librte_meter.so.24.1 00:02:41.178 [239/268] Linking target lib/librte_pci.so.24.1 00:02:41.178 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.178 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.178 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.178 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.178 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.178 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.178 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.178 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:41.178 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:41.178 [249/268] Linking 
target drivers/librte_bus_pci.so.24.1 00:02:41.435 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.435 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.435 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.435 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.435 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.693 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:41.693 [256/268] Linking target lib/librte_net.so.24.1 00:02:41.693 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.693 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.693 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:41.693 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:41.693 [261/268] Linking target lib/librte_security.so.24.1 00:02:41.693 [262/268] Linking target lib/librte_hash.so.24.1 00:02:41.693 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:41.693 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:41.952 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:41.952 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:41.952 [267/268] Linking target lib/librte_power.so.24.1 00:02:41.952 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:41.952 INFO: autodetecting backend as ninja 00:02:41.952 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:42.889 CC lib/ut_mock/mock.o 00:02:42.889 CC lib/ut/ut.o 00:02:42.889 CC lib/log/log.o 00:02:42.889 CC lib/log/log_flags.o 00:02:42.889 CC lib/log/log_deprecated.o 00:02:43.147 LIB libspdk_ut_mock.a 00:02:43.147 LIB libspdk_ut.a 00:02:43.147 SO 
libspdk_ut_mock.so.6.0 00:02:43.147 LIB libspdk_log.a 00:02:43.147 SO libspdk_ut.so.2.0 00:02:43.147 SO libspdk_log.so.7.0 00:02:43.147 SYMLINK libspdk_ut_mock.so 00:02:43.147 SYMLINK libspdk_ut.so 00:02:43.147 SYMLINK libspdk_log.so 00:02:43.406 CC lib/dma/dma.o 00:02:43.406 CC lib/ioat/ioat.o 00:02:43.406 CXX lib/trace_parser/trace.o 00:02:43.406 CC lib/util/base64.o 00:02:43.406 CC lib/util/bit_array.o 00:02:43.406 CC lib/util/cpuset.o 00:02:43.406 CC lib/util/crc32.o 00:02:43.406 CC lib/util/crc16.o 00:02:43.406 CC lib/util/crc32c.o 00:02:43.406 CC lib/util/crc32_ieee.o 00:02:43.406 CC lib/util/crc64.o 00:02:43.406 CC lib/util/dif.o 00:02:43.406 CC lib/util/fd.o 00:02:43.406 CC lib/util/file.o 00:02:43.406 CC lib/util/hexlify.o 00:02:43.406 CC lib/util/iov.o 00:02:43.406 CC lib/util/math.o 00:02:43.406 CC lib/util/pipe.o 00:02:43.406 CC lib/util/strerror_tls.o 00:02:43.406 CC lib/util/string.o 00:02:43.406 CC lib/util/uuid.o 00:02:43.406 CC lib/util/fd_group.o 00:02:43.664 CC lib/util/xor.o 00:02:43.664 CC lib/util/zipf.o 00:02:43.664 LIB libspdk_dma.a 00:02:43.664 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.664 CC lib/vfio_user/host/vfio_user.o 00:02:43.664 SO libspdk_dma.so.4.0 00:02:43.664 SYMLINK libspdk_dma.so 00:02:43.664 LIB libspdk_ioat.a 00:02:43.664 SO libspdk_ioat.so.7.0 00:02:43.923 SYMLINK libspdk_ioat.so 00:02:43.923 LIB libspdk_vfio_user.a 00:02:43.923 SO libspdk_vfio_user.so.5.0 00:02:43.923 LIB libspdk_util.a 00:02:43.923 SYMLINK libspdk_vfio_user.so 00:02:43.923 SO libspdk_util.so.9.1 00:02:44.182 SYMLINK libspdk_util.so 00:02:44.182 LIB libspdk_trace_parser.a 00:02:44.182 SO libspdk_trace_parser.so.5.0 00:02:44.440 SYMLINK libspdk_trace_parser.so 00:02:44.440 CC lib/json/json_parse.o 00:02:44.440 CC lib/json/json_util.o 00:02:44.440 CC lib/json/json_write.o 00:02:44.440 CC lib/conf/conf.o 00:02:44.440 CC lib/env_dpdk/env.o 00:02:44.440 CC lib/env_dpdk/memory.o 00:02:44.440 CC lib/env_dpdk/pci.o 00:02:44.440 CC lib/env_dpdk/init.o 
00:02:44.440 CC lib/env_dpdk/threads.o 00:02:44.440 CC lib/env_dpdk/pci_ioat.o 00:02:44.440 CC lib/idxd/idxd.o 00:02:44.440 CC lib/env_dpdk/pci_virtio.o 00:02:44.440 CC lib/idxd/idxd_user.o 00:02:44.440 CC lib/env_dpdk/pci_vmd.o 00:02:44.440 CC lib/idxd/idxd_kernel.o 00:02:44.440 CC lib/rdma_provider/common.o 00:02:44.440 CC lib/env_dpdk/pci_idxd.o 00:02:44.440 CC lib/env_dpdk/pci_event.o 00:02:44.440 CC lib/env_dpdk/sigbus_handler.o 00:02:44.440 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.440 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.440 CC lib/env_dpdk/pci_dpdk.o 00:02:44.440 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.440 CC lib/vmd/vmd.o 00:02:44.440 CC lib/vmd/led.o 00:02:44.440 CC lib/rdma_utils/rdma_utils.o 00:02:44.699 LIB libspdk_rdma_provider.a 00:02:44.699 LIB libspdk_conf.a 00:02:44.699 LIB libspdk_json.a 00:02:44.699 SO libspdk_rdma_provider.so.6.0 00:02:44.699 SO libspdk_conf.so.6.0 00:02:44.699 SO libspdk_json.so.6.0 00:02:44.699 LIB libspdk_rdma_utils.a 00:02:44.699 SYMLINK libspdk_json.so 00:02:44.699 SYMLINK libspdk_rdma_provider.so 00:02:44.699 SYMLINK libspdk_conf.so 00:02:44.699 SO libspdk_rdma_utils.so.1.0 00:02:44.699 SYMLINK libspdk_rdma_utils.so 00:02:44.958 LIB libspdk_idxd.a 00:02:44.958 SO libspdk_idxd.so.12.0 00:02:44.958 LIB libspdk_vmd.a 00:02:44.958 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.958 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.958 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.958 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.958 SO libspdk_vmd.so.6.0 00:02:44.958 SYMLINK libspdk_idxd.so 00:02:44.958 SYMLINK libspdk_vmd.so 00:02:45.217 LIB libspdk_jsonrpc.a 00:02:45.217 SO libspdk_jsonrpc.so.6.0 00:02:45.217 SYMLINK libspdk_jsonrpc.so 00:02:45.476 LIB libspdk_env_dpdk.a 00:02:45.476 SO libspdk_env_dpdk.so.14.1 00:02:45.476 CC lib/rpc/rpc.o 00:02:45.476 SYMLINK libspdk_env_dpdk.so 00:02:45.734 LIB libspdk_rpc.a 00:02:45.734 SO libspdk_rpc.so.6.0 00:02:45.734 SYMLINK libspdk_rpc.so 00:02:45.992 CC lib/keyring/keyring.o 
00:02:45.992 CC lib/trace/trace.o 00:02:45.992 CC lib/keyring/keyring_rpc.o 00:02:45.992 CC lib/trace/trace_flags.o 00:02:45.992 CC lib/trace/trace_rpc.o 00:02:45.992 CC lib/notify/notify.o 00:02:45.992 CC lib/notify/notify_rpc.o 00:02:46.252 LIB libspdk_notify.a 00:02:46.252 SO libspdk_notify.so.6.0 00:02:46.252 LIB libspdk_keyring.a 00:02:46.252 LIB libspdk_trace.a 00:02:46.252 SO libspdk_keyring.so.1.0 00:02:46.252 SO libspdk_trace.so.10.0 00:02:46.252 SYMLINK libspdk_notify.so 00:02:46.511 SYMLINK libspdk_keyring.so 00:02:46.511 SYMLINK libspdk_trace.so 00:02:46.771 CC lib/thread/thread.o 00:02:46.771 CC lib/thread/iobuf.o 00:02:46.771 CC lib/sock/sock.o 00:02:46.771 CC lib/sock/sock_rpc.o 00:02:47.030 LIB libspdk_sock.a 00:02:47.030 SO libspdk_sock.so.10.0 00:02:47.030 SYMLINK libspdk_sock.so 00:02:47.290 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.290 CC lib/nvme/nvme_ctrlr.o 00:02:47.290 CC lib/nvme/nvme_fabric.o 00:02:47.290 CC lib/nvme/nvme_ns_cmd.o 00:02:47.290 CC lib/nvme/nvme_ns.o 00:02:47.290 CC lib/nvme/nvme_pcie.o 00:02:47.290 CC lib/nvme/nvme_pcie_common.o 00:02:47.290 CC lib/nvme/nvme_qpair.o 00:02:47.290 CC lib/nvme/nvme.o 00:02:47.290 CC lib/nvme/nvme_quirks.o 00:02:47.290 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.290 CC lib/nvme/nvme_transport.o 00:02:47.290 CC lib/nvme/nvme_discovery.o 00:02:47.290 CC lib/nvme/nvme_tcp.o 00:02:47.290 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.290 CC lib/nvme/nvme_opal.o 00:02:47.290 CC lib/nvme/nvme_io_msg.o 00:02:47.290 CC lib/nvme/nvme_poll_group.o 00:02:47.290 CC lib/nvme/nvme_zns.o 00:02:47.290 CC lib/nvme/nvme_stubs.o 00:02:47.290 CC lib/nvme/nvme_auth.o 00:02:47.290 CC lib/nvme/nvme_cuse.o 00:02:47.290 CC lib/nvme/nvme_rdma.o 00:02:47.290 CC lib/nvme/nvme_vfio_user.o 00:02:47.856 LIB libspdk_thread.a 00:02:47.856 SO libspdk_thread.so.10.1 00:02:47.856 SYMLINK libspdk_thread.so 00:02:48.115 CC lib/accel/accel_rpc.o 00:02:48.115 CC lib/accel/accel.o 00:02:48.115 CC lib/accel/accel_sw.o 00:02:48.115 CC 
lib/virtio/virtio.o 00:02:48.115 CC lib/virtio/virtio_vfio_user.o 00:02:48.115 CC lib/blob/blobstore.o 00:02:48.115 CC lib/blob/request.o 00:02:48.115 CC lib/virtio/virtio_vhost_user.o 00:02:48.115 CC lib/vfu_tgt/tgt_endpoint.o 00:02:48.115 CC lib/vfu_tgt/tgt_rpc.o 00:02:48.115 CC lib/blob/zeroes.o 00:02:48.115 CC lib/blob/blob_bs_dev.o 00:02:48.115 CC lib/virtio/virtio_pci.o 00:02:48.115 CC lib/init/json_config.o 00:02:48.115 CC lib/init/subsystem.o 00:02:48.115 CC lib/init/subsystem_rpc.o 00:02:48.115 CC lib/init/rpc.o 00:02:48.373 LIB libspdk_init.a 00:02:48.373 SO libspdk_init.so.5.0 00:02:48.373 LIB libspdk_vfu_tgt.a 00:02:48.373 LIB libspdk_virtio.a 00:02:48.373 SO libspdk_vfu_tgt.so.3.0 00:02:48.373 SO libspdk_virtio.so.7.0 00:02:48.373 SYMLINK libspdk_init.so 00:02:48.373 SYMLINK libspdk_vfu_tgt.so 00:02:48.373 SYMLINK libspdk_virtio.so 00:02:48.632 CC lib/event/app.o 00:02:48.632 CC lib/event/reactor.o 00:02:48.632 CC lib/event/log_rpc.o 00:02:48.632 CC lib/event/app_rpc.o 00:02:48.632 CC lib/event/scheduler_static.o 00:02:48.892 LIB libspdk_accel.a 00:02:48.892 SO libspdk_accel.so.15.1 00:02:48.892 SYMLINK libspdk_accel.so 00:02:48.892 LIB libspdk_nvme.a 00:02:48.892 LIB libspdk_event.a 00:02:49.150 SO libspdk_event.so.14.0 00:02:49.150 SO libspdk_nvme.so.13.1 00:02:49.150 SYMLINK libspdk_event.so 00:02:49.150 CC lib/bdev/bdev.o 00:02:49.150 CC lib/bdev/bdev_rpc.o 00:02:49.150 CC lib/bdev/part.o 00:02:49.150 CC lib/bdev/bdev_zone.o 00:02:49.150 CC lib/bdev/scsi_nvme.o 00:02:49.409 SYMLINK libspdk_nvme.so 00:02:50.346 LIB libspdk_blob.a 00:02:50.346 SO libspdk_blob.so.11.0 00:02:50.346 SYMLINK libspdk_blob.so 00:02:50.605 CC lib/lvol/lvol.o 00:02:50.605 CC lib/blobfs/blobfs.o 00:02:50.605 CC lib/blobfs/tree.o 00:02:50.864 LIB libspdk_bdev.a 00:02:51.123 SO libspdk_bdev.so.15.1 00:02:51.123 LIB libspdk_blobfs.a 00:02:51.123 SYMLINK libspdk_bdev.so 00:02:51.123 SO libspdk_blobfs.so.10.0 00:02:51.123 LIB libspdk_lvol.a 00:02:51.123 SYMLINK libspdk_blobfs.so 
00:02:51.123 SO libspdk_lvol.so.10.0 00:02:51.381 SYMLINK libspdk_lvol.so 00:02:51.381 CC lib/nbd/nbd.o 00:02:51.381 CC lib/nbd/nbd_rpc.o 00:02:51.381 CC lib/ftl/ftl_core.o 00:02:51.381 CC lib/ftl/ftl_init.o 00:02:51.381 CC lib/ftl/ftl_layout.o 00:02:51.381 CC lib/ftl/ftl_debug.o 00:02:51.381 CC lib/ftl/ftl_io.o 00:02:51.381 CC lib/ftl/ftl_sb.o 00:02:51.381 CC lib/ftl/ftl_l2p.o 00:02:51.381 CC lib/ftl/ftl_l2p_flat.o 00:02:51.381 CC lib/ftl/ftl_nv_cache.o 00:02:51.381 CC lib/ftl/ftl_writer.o 00:02:51.381 CC lib/nvmf/ctrlr.o 00:02:51.381 CC lib/ftl/ftl_band.o 00:02:51.381 CC lib/nvmf/ctrlr_discovery.o 00:02:51.381 CC lib/ftl/ftl_band_ops.o 00:02:51.381 CC lib/nvmf/ctrlr_bdev.o 00:02:51.381 CC lib/ftl/ftl_rq.o 00:02:51.381 CC lib/nvmf/subsystem.o 00:02:51.381 CC lib/nvmf/nvmf.o 00:02:51.381 CC lib/nvmf/transport.o 00:02:51.381 CC lib/ftl/ftl_reloc.o 00:02:51.381 CC lib/nvmf/nvmf_rpc.o 00:02:51.381 CC lib/ftl/ftl_l2p_cache.o 00:02:51.381 CC lib/ftl/ftl_p2l.o 00:02:51.381 CC lib/nvmf/tcp.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.381 CC lib/nvmf/mdns_server.o 00:02:51.381 CC lib/nvmf/stubs.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.381 CC lib/scsi/dev.o 00:02:51.381 CC lib/nvmf/vfio_user.o 00:02:51.381 CC lib/nvmf/rdma.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.381 CC lib/ublk/ublk.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.381 CC lib/scsi/lun.o 00:02:51.381 CC lib/nvmf/auth.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.381 CC lib/scsi/scsi_bdev.o 00:02:51.381 CC lib/scsi/port.o 00:02:51.381 CC lib/ublk/ublk_rpc.o 00:02:51.381 CC lib/scsi/scsi.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.381 CC lib/scsi/scsi_pr.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.381 
CC lib/scsi/scsi_rpc.o 00:02:51.381 CC lib/scsi/task.o 00:02:51.381 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.381 CC lib/ftl/utils/ftl_conf.o 00:02:51.381 CC lib/ftl/utils/ftl_md.o 00:02:51.381 CC lib/ftl/utils/ftl_mempool.o 00:02:51.381 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.381 CC lib/ftl/utils/ftl_property.o 00:02:51.381 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.381 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.381 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.381 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.381 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.381 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.381 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.381 CC lib/ftl/base/ftl_base_dev.o 00:02:51.381 CC lib/ftl/ftl_trace.o 00:02:51.947 LIB libspdk_nbd.a 00:02:51.947 SO libspdk_nbd.so.7.0 00:02:51.947 SYMLINK libspdk_nbd.so 00:02:51.947 LIB libspdk_scsi.a 00:02:51.947 SO libspdk_scsi.so.9.0 00:02:51.947 SYMLINK libspdk_scsi.so 00:02:52.204 LIB libspdk_ublk.a 00:02:52.205 SO libspdk_ublk.so.3.0 00:02:52.205 SYMLINK libspdk_ublk.so 00:02:52.463 CC lib/vhost/vhost.o 00:02:52.463 CC lib/vhost/vhost_rpc.o 00:02:52.463 CC lib/vhost/rte_vhost_user.o 00:02:52.463 CC lib/vhost/vhost_scsi.o 00:02:52.463 CC lib/vhost/vhost_blk.o 00:02:52.463 CC lib/iscsi/conn.o 00:02:52.463 CC lib/iscsi/init_grp.o 00:02:52.463 CC lib/iscsi/iscsi.o 00:02:52.463 CC lib/iscsi/md5.o 00:02:52.463 CC lib/iscsi/param.o 00:02:52.463 CC lib/iscsi/portal_grp.o 00:02:52.463 CC lib/iscsi/tgt_node.o 00:02:52.463 CC lib/iscsi/iscsi_subsystem.o 00:02:52.463 CC lib/iscsi/iscsi_rpc.o 00:02:52.463 CC lib/iscsi/task.o 00:02:52.463 LIB libspdk_ftl.a 00:02:52.463 SO libspdk_ftl.so.9.0 00:02:53.029 SYMLINK libspdk_ftl.so 00:02:53.029 LIB libspdk_nvmf.a 00:02:53.029 SO 
libspdk_nvmf.so.18.1 00:02:53.029 LIB libspdk_vhost.a 00:02:53.288 SO libspdk_vhost.so.8.0 00:02:53.288 SYMLINK libspdk_nvmf.so 00:02:53.288 SYMLINK libspdk_vhost.so 00:02:53.288 LIB libspdk_iscsi.a 00:02:53.288 SO libspdk_iscsi.so.8.0 00:02:53.546 SYMLINK libspdk_iscsi.so 00:02:54.154 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.154 CC module/vfu_device/vfu_virtio.o 00:02:54.154 CC module/vfu_device/vfu_virtio_blk.o 00:02:54.154 CC module/vfu_device/vfu_virtio_scsi.o 00:02:54.154 CC module/vfu_device/vfu_virtio_rpc.o 00:02:54.154 CC module/blob/bdev/blob_bdev.o 00:02:54.154 CC module/accel/error/accel_error.o 00:02:54.154 CC module/accel/error/accel_error_rpc.o 00:02:54.154 CC module/accel/dsa/accel_dsa.o 00:02:54.154 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.154 LIB libspdk_env_dpdk_rpc.a 00:02:54.154 CC module/keyring/file/keyring.o 00:02:54.154 CC module/keyring/linux/keyring.o 00:02:54.154 CC module/keyring/linux/keyring_rpc.o 00:02:54.154 CC module/sock/posix/posix.o 00:02:54.154 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.154 CC module/keyring/file/keyring_rpc.o 00:02:54.154 CC module/accel/ioat/accel_ioat.o 00:02:54.154 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.154 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.154 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.154 CC module/accel/iaa/accel_iaa.o 00:02:54.154 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.154 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.154 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.154 LIB libspdk_scheduler_gscheduler.a 00:02:54.154 LIB libspdk_keyring_linux.a 00:02:54.154 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.154 LIB libspdk_accel_error.a 00:02:54.412 LIB libspdk_keyring_file.a 00:02:54.412 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.412 LIB libspdk_accel_ioat.a 00:02:54.412 SO libspdk_keyring_file.so.1.0 00:02:54.412 SO libspdk_keyring_linux.so.1.0 00:02:54.412 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.412 SO libspdk_accel_error.so.2.0 
00:02:54.412 LIB libspdk_scheduler_dynamic.a 00:02:54.412 LIB libspdk_blob_bdev.a 00:02:54.412 LIB libspdk_accel_dsa.a 00:02:54.412 SO libspdk_accel_ioat.so.6.0 00:02:54.412 LIB libspdk_accel_iaa.a 00:02:54.412 SYMLINK libspdk_keyring_file.so 00:02:54.412 SYMLINK libspdk_keyring_linux.so 00:02:54.412 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.412 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.412 SO libspdk_accel_dsa.so.5.0 00:02:54.412 SO libspdk_blob_bdev.so.11.0 00:02:54.412 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.412 SYMLINK libspdk_accel_error.so 00:02:54.412 SO libspdk_accel_iaa.so.3.0 00:02:54.412 SYMLINK libspdk_accel_ioat.so 00:02:54.412 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.412 SYMLINK libspdk_accel_dsa.so 00:02:54.412 SYMLINK libspdk_blob_bdev.so 00:02:54.412 SYMLINK libspdk_accel_iaa.so 00:02:54.412 LIB libspdk_vfu_device.a 00:02:54.412 SO libspdk_vfu_device.so.3.0 00:02:54.671 SYMLINK libspdk_vfu_device.so 00:02:54.671 LIB libspdk_sock_posix.a 00:02:54.671 SO libspdk_sock_posix.so.6.0 00:02:54.929 SYMLINK libspdk_sock_posix.so 00:02:54.929 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.929 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.929 CC module/bdev/ftl/bdev_ftl.o 00:02:54.929 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:54.929 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.929 CC module/bdev/malloc/bdev_malloc.o 00:02:54.929 CC module/bdev/gpt/gpt.o 00:02:54.929 CC module/bdev/raid/bdev_raid.o 00:02:54.929 CC module/bdev/nvme/bdev_nvme.o 00:02:54.929 CC module/bdev/nvme/nvme_rpc.o 00:02:54.929 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.929 CC module/bdev/gpt/vbdev_gpt.o 00:02:54.929 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.929 CC module/bdev/nvme/bdev_mdns_client.o 00:02:54.929 CC module/bdev/nvme/vbdev_opal.o 00:02:54.929 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.929 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:54.929 CC module/bdev/raid/raid0.o 00:02:54.929 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:54.929 CC 
module/bdev/error/vbdev_error.o 00:02:54.929 CC module/bdev/raid/raid1.o 00:02:54.929 CC module/bdev/raid/concat.o 00:02:54.929 CC module/bdev/error/vbdev_error_rpc.o 00:02:54.929 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.929 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.929 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.929 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.929 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:54.929 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:54.929 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:54.929 CC module/bdev/null/bdev_null.o 00:02:54.929 CC module/bdev/null/bdev_null_rpc.o 00:02:54.929 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:54.929 CC module/bdev/iscsi/bdev_iscsi.o 00:02:54.929 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.929 CC module/bdev/delay/vbdev_delay.o 00:02:54.929 CC module/bdev/split/vbdev_split.o 00:02:54.929 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.929 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.929 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:54.929 CC module/bdev/aio/bdev_aio.o 00:02:54.929 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.218 LIB libspdk_blobfs_bdev.a 00:02:55.218 LIB libspdk_bdev_split.a 00:02:55.218 SO libspdk_blobfs_bdev.so.6.0 00:02:55.218 LIB libspdk_bdev_ftl.a 00:02:55.218 LIB libspdk_bdev_error.a 00:02:55.218 LIB libspdk_bdev_gpt.a 00:02:55.218 SO libspdk_bdev_split.so.6.0 00:02:55.218 LIB libspdk_bdev_passthru.a 00:02:55.218 LIB libspdk_bdev_null.a 00:02:55.218 SO libspdk_bdev_ftl.so.6.0 00:02:55.218 SO libspdk_bdev_error.so.6.0 00:02:55.218 SO libspdk_bdev_gpt.so.6.0 00:02:55.218 SYMLINK libspdk_blobfs_bdev.so 00:02:55.218 SO libspdk_bdev_passthru.so.6.0 00:02:55.218 SO libspdk_bdev_null.so.6.0 00:02:55.218 LIB libspdk_bdev_aio.a 00:02:55.218 SYMLINK libspdk_bdev_split.so 00:02:55.218 LIB libspdk_bdev_zone_block.a 00:02:55.218 SYMLINK libspdk_bdev_gpt.so 00:02:55.218 LIB libspdk_bdev_malloc.a 00:02:55.218 SYMLINK libspdk_bdev_error.so 
00:02:55.218 SYMLINK libspdk_bdev_ftl.so 00:02:55.218 SO libspdk_bdev_zone_block.so.6.0 00:02:55.218 LIB libspdk_bdev_iscsi.a 00:02:55.218 SO libspdk_bdev_aio.so.6.0 00:02:55.218 LIB libspdk_bdev_delay.a 00:02:55.218 SYMLINK libspdk_bdev_passthru.so 00:02:55.218 SO libspdk_bdev_malloc.so.6.0 00:02:55.218 SYMLINK libspdk_bdev_null.so 00:02:55.218 SO libspdk_bdev_delay.so.6.0 00:02:55.218 SO libspdk_bdev_iscsi.so.6.0 00:02:55.218 SYMLINK libspdk_bdev_zone_block.so 00:02:55.476 SYMLINK libspdk_bdev_aio.so 00:02:55.476 SYMLINK libspdk_bdev_malloc.so 00:02:55.476 LIB libspdk_bdev_lvol.a 00:02:55.476 SYMLINK libspdk_bdev_delay.so 00:02:55.476 SYMLINK libspdk_bdev_iscsi.so 00:02:55.476 LIB libspdk_bdev_virtio.a 00:02:55.476 SO libspdk_bdev_lvol.so.6.0 00:02:55.476 SO libspdk_bdev_virtio.so.6.0 00:02:55.476 SYMLINK libspdk_bdev_lvol.so 00:02:55.476 SYMLINK libspdk_bdev_virtio.so 00:02:55.735 LIB libspdk_bdev_raid.a 00:02:55.735 SO libspdk_bdev_raid.so.6.0 00:02:55.735 SYMLINK libspdk_bdev_raid.so 00:02:56.671 LIB libspdk_bdev_nvme.a 00:02:56.671 SO libspdk_bdev_nvme.so.7.0 00:02:56.671 SYMLINK libspdk_bdev_nvme.so 00:02:57.238 CC module/event/subsystems/vmd/vmd.o 00:02:57.238 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.238 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:57.238 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.238 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.238 CC module/event/subsystems/sock/sock.o 00:02:57.238 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.238 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:57.238 CC module/event/subsystems/keyring/keyring.o 00:02:57.238 LIB libspdk_event_vmd.a 00:02:57.238 LIB libspdk_event_vhost_blk.a 00:02:57.238 LIB libspdk_event_iobuf.a 00:02:57.238 LIB libspdk_event_scheduler.a 00:02:57.238 SO libspdk_event_vmd.so.6.0 00:02:57.238 SO libspdk_event_vhost_blk.so.3.0 00:02:57.238 LIB libspdk_event_sock.a 00:02:57.497 LIB libspdk_event_keyring.a 00:02:57.497 SO 
libspdk_event_iobuf.so.3.0 00:02:57.497 LIB libspdk_event_vfu_tgt.a 00:02:57.497 SO libspdk_event_scheduler.so.4.0 00:02:57.497 SO libspdk_event_sock.so.5.0 00:02:57.497 SO libspdk_event_keyring.so.1.0 00:02:57.497 SYMLINK libspdk_event_vmd.so 00:02:57.497 SYMLINK libspdk_event_vhost_blk.so 00:02:57.497 SO libspdk_event_vfu_tgt.so.3.0 00:02:57.497 SYMLINK libspdk_event_iobuf.so 00:02:57.497 SYMLINK libspdk_event_scheduler.so 00:02:57.497 SYMLINK libspdk_event_sock.so 00:02:57.497 SYMLINK libspdk_event_keyring.so 00:02:57.497 SYMLINK libspdk_event_vfu_tgt.so 00:02:57.756 CC module/event/subsystems/accel/accel.o 00:02:57.756 LIB libspdk_event_accel.a 00:02:58.014 SO libspdk_event_accel.so.6.0 00:02:58.014 SYMLINK libspdk_event_accel.so 00:02:58.286 CC module/event/subsystems/bdev/bdev.o 00:02:58.544 LIB libspdk_event_bdev.a 00:02:58.544 SO libspdk_event_bdev.so.6.0 00:02:58.544 SYMLINK libspdk_event_bdev.so 00:02:58.803 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:58.803 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:58.803 CC module/event/subsystems/nbd/nbd.o 00:02:58.803 CC module/event/subsystems/ublk/ublk.o 00:02:58.803 CC module/event/subsystems/scsi/scsi.o 00:02:59.061 LIB libspdk_event_nbd.a 00:02:59.061 LIB libspdk_event_ublk.a 00:02:59.061 LIB libspdk_event_scsi.a 00:02:59.061 SO libspdk_event_nbd.so.6.0 00:02:59.061 LIB libspdk_event_nvmf.a 00:02:59.061 SO libspdk_event_ublk.so.3.0 00:02:59.061 SO libspdk_event_scsi.so.6.0 00:02:59.061 SO libspdk_event_nvmf.so.6.0 00:02:59.061 SYMLINK libspdk_event_nbd.so 00:02:59.062 SYMLINK libspdk_event_ublk.so 00:02:59.062 SYMLINK libspdk_event_scsi.so 00:02:59.062 SYMLINK libspdk_event_nvmf.so 00:02:59.319 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.319 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.579 LIB libspdk_event_vhost_scsi.a 00:02:59.579 LIB libspdk_event_iscsi.a 00:02:59.579 SO libspdk_event_vhost_scsi.so.3.0 00:02:59.579 SO libspdk_event_iscsi.so.6.0 00:02:59.579 SYMLINK 
libspdk_event_vhost_scsi.so 00:02:59.579 SYMLINK libspdk_event_iscsi.so 00:02:59.838 SO libspdk.so.6.0 00:02:59.838 SYMLINK libspdk.so 00:03:00.097 CXX app/trace/trace.o 00:03:00.097 CC app/trace_record/trace_record.o 00:03:00.097 CC app/spdk_top/spdk_top.o 00:03:00.097 CC app/spdk_nvme_identify/identify.o 00:03:00.097 CC app/spdk_lspci/spdk_lspci.o 00:03:00.097 CC app/spdk_nvme_perf/perf.o 00:03:00.097 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.097 CC test/rpc_client/rpc_client_test.o 00:03:00.097 CC app/spdk_dd/spdk_dd.o 00:03:00.097 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.097 TEST_HEADER include/spdk/assert.h 00:03:00.097 TEST_HEADER include/spdk/accel.h 00:03:00.097 TEST_HEADER include/spdk/base64.h 00:03:00.097 TEST_HEADER include/spdk/accel_module.h 00:03:00.097 TEST_HEADER include/spdk/bdev_module.h 00:03:00.097 TEST_HEADER include/spdk/bdev.h 00:03:00.097 TEST_HEADER include/spdk/bit_array.h 00:03:00.097 TEST_HEADER include/spdk/barrier.h 00:03:00.097 TEST_HEADER include/spdk/bdev_zone.h 00:03:00.097 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.097 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.097 TEST_HEADER include/spdk/bit_pool.h 00:03:00.097 TEST_HEADER include/spdk/blobfs.h 00:03:00.097 TEST_HEADER include/spdk/blob.h 00:03:00.097 TEST_HEADER include/spdk/config.h 00:03:00.097 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:00.097 TEST_HEADER include/spdk/conf.h 00:03:00.097 TEST_HEADER include/spdk/cpuset.h 00:03:00.097 TEST_HEADER include/spdk/crc16.h 00:03:00.097 TEST_HEADER include/spdk/crc64.h 00:03:00.097 TEST_HEADER include/spdk/crc32.h 00:03:00.097 TEST_HEADER include/spdk/dif.h 00:03:00.097 CC app/nvmf_tgt/nvmf_main.o 00:03:00.097 TEST_HEADER include/spdk/endian.h 00:03:00.097 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.097 TEST_HEADER include/spdk/dma.h 00:03:00.097 TEST_HEADER include/spdk/env.h 00:03:00.097 TEST_HEADER include/spdk/fd_group.h 00:03:00.097 TEST_HEADER include/spdk/event.h 00:03:00.097 TEST_HEADER include/spdk/fd.h 
00:03:00.097 TEST_HEADER include/spdk/file.h 00:03:00.097 TEST_HEADER include/spdk/ftl.h 00:03:00.097 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.097 TEST_HEADER include/spdk/hexlify.h 00:03:00.097 TEST_HEADER include/spdk/histogram_data.h 00:03:00.097 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.097 TEST_HEADER include/spdk/ioat.h 00:03:00.097 TEST_HEADER include/spdk/idxd.h 00:03:00.097 TEST_HEADER include/spdk/init.h 00:03:00.097 TEST_HEADER include/spdk/ioat_spec.h 00:03:00.097 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.097 TEST_HEADER include/spdk/json.h 00:03:00.097 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.097 TEST_HEADER include/spdk/keyring.h 00:03:00.097 TEST_HEADER include/spdk/likely.h 00:03:00.097 CC app/spdk_tgt/spdk_tgt.o 00:03:00.097 TEST_HEADER include/spdk/keyring_module.h 00:03:00.097 TEST_HEADER include/spdk/lvol.h 00:03:00.097 TEST_HEADER include/spdk/mmio.h 00:03:00.097 TEST_HEADER include/spdk/log.h 00:03:00.097 TEST_HEADER include/spdk/notify.h 00:03:00.097 TEST_HEADER include/spdk/memory.h 00:03:00.097 TEST_HEADER include/spdk/nbd.h 00:03:00.097 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.097 TEST_HEADER include/spdk/nvme.h 00:03:00.097 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.097 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.097 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.097 TEST_HEADER include/spdk/nvme_spec.h 00:03:00.097 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.097 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.097 TEST_HEADER include/spdk/nvmf.h 00:03:00.097 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.097 TEST_HEADER include/spdk/opal.h 00:03:00.097 TEST_HEADER include/spdk/opal_spec.h 00:03:00.097 TEST_HEADER include/spdk/pci_ids.h 00:03:00.097 TEST_HEADER include/spdk/queue.h 00:03:00.097 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.097 TEST_HEADER include/spdk/pipe.h 00:03:00.098 TEST_HEADER include/spdk/rpc.h 00:03:00.098 TEST_HEADER include/spdk/scheduler.h 00:03:00.098 TEST_HEADER 
include/spdk/reduce.h 00:03:00.098 TEST_HEADER include/spdk/scsi.h 00:03:00.098 TEST_HEADER include/spdk/scsi_spec.h 00:03:00.098 TEST_HEADER include/spdk/sock.h 00:03:00.098 TEST_HEADER include/spdk/stdinc.h 00:03:00.098 TEST_HEADER include/spdk/thread.h 00:03:00.098 TEST_HEADER include/spdk/string.h 00:03:00.098 TEST_HEADER include/spdk/trace.h 00:03:00.098 TEST_HEADER include/spdk/trace_parser.h 00:03:00.098 TEST_HEADER include/spdk/ublk.h 00:03:00.098 TEST_HEADER include/spdk/util.h 00:03:00.098 TEST_HEADER include/spdk/tree.h 00:03:00.098 TEST_HEADER include/spdk/uuid.h 00:03:00.098 TEST_HEADER include/spdk/version.h 00:03:00.098 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.098 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.098 TEST_HEADER include/spdk/vhost.h 00:03:00.098 TEST_HEADER include/spdk/vmd.h 00:03:00.098 TEST_HEADER include/spdk/xor.h 00:03:00.098 CXX test/cpp_headers/accel_module.o 00:03:00.098 TEST_HEADER include/spdk/zipf.h 00:03:00.098 CXX test/cpp_headers/accel.o 00:03:00.098 CXX test/cpp_headers/barrier.o 00:03:00.098 CXX test/cpp_headers/assert.o 00:03:00.098 CXX test/cpp_headers/base64.o 00:03:00.098 CXX test/cpp_headers/bdev.o 00:03:00.098 CXX test/cpp_headers/bdev_module.o 00:03:00.098 CXX test/cpp_headers/bit_array.o 00:03:00.098 CXX test/cpp_headers/bit_pool.o 00:03:00.098 CXX test/cpp_headers/bdev_zone.o 00:03:00.098 CXX test/cpp_headers/blob_bdev.o 00:03:00.098 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.098 CXX test/cpp_headers/conf.o 00:03:00.098 CXX test/cpp_headers/config.o 00:03:00.098 CXX test/cpp_headers/blob.o 00:03:00.098 CXX test/cpp_headers/blobfs.o 00:03:00.098 CXX test/cpp_headers/cpuset.o 00:03:00.098 CXX test/cpp_headers/crc64.o 00:03:00.098 CXX test/cpp_headers/crc32.o 00:03:00.098 CXX test/cpp_headers/crc16.o 00:03:00.098 CXX test/cpp_headers/dif.o 00:03:00.098 CXX test/cpp_headers/dma.o 00:03:00.098 CXX test/cpp_headers/env_dpdk.o 00:03:00.098 CXX test/cpp_headers/env.o 00:03:00.098 CXX test/cpp_headers/fd.o 
00:03:00.098 CXX test/cpp_headers/fd_group.o 00:03:00.098 CXX test/cpp_headers/endian.o 00:03:00.098 CXX test/cpp_headers/event.o 00:03:00.098 CXX test/cpp_headers/file.o 00:03:00.098 CXX test/cpp_headers/ftl.o 00:03:00.098 CXX test/cpp_headers/hexlify.o 00:03:00.098 CXX test/cpp_headers/gpt_spec.o 00:03:00.098 CXX test/cpp_headers/histogram_data.o 00:03:00.098 CXX test/cpp_headers/ioat.o 00:03:00.098 CXX test/cpp_headers/idxd_spec.o 00:03:00.098 CXX test/cpp_headers/idxd.o 00:03:00.098 CXX test/cpp_headers/init.o 00:03:00.098 CXX test/cpp_headers/iscsi_spec.o 00:03:00.098 CXX test/cpp_headers/ioat_spec.o 00:03:00.098 CXX test/cpp_headers/jsonrpc.o 00:03:00.098 CXX test/cpp_headers/json.o 00:03:00.098 CXX test/cpp_headers/keyring_module.o 00:03:00.098 CXX test/cpp_headers/keyring.o 00:03:00.098 CXX test/cpp_headers/likely.o 00:03:00.098 CXX test/cpp_headers/lvol.o 00:03:00.098 CXX test/cpp_headers/memory.o 00:03:00.098 CXX test/cpp_headers/log.o 00:03:00.098 CXX test/cpp_headers/nbd.o 00:03:00.098 CXX test/cpp_headers/mmio.o 00:03:00.098 CXX test/cpp_headers/nvme_intel.o 00:03:00.098 CXX test/cpp_headers/notify.o 00:03:00.098 CXX test/cpp_headers/nvme.o 00:03:00.374 CXX test/cpp_headers/nvme_spec.o 00:03:00.374 CXX test/cpp_headers/nvme_zns.o 00:03:00.374 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.374 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.374 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.374 CC examples/util/zipf/zipf.o 00:03:00.374 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.374 CXX test/cpp_headers/nvmf_spec.o 00:03:00.374 CXX test/cpp_headers/nvmf.o 00:03:00.374 CXX test/cpp_headers/nvmf_transport.o 00:03:00.374 CXX test/cpp_headers/opal.o 00:03:00.374 CXX test/cpp_headers/opal_spec.o 00:03:00.374 CXX test/cpp_headers/pipe.o 00:03:00.374 CXX test/cpp_headers/pci_ids.o 00:03:00.374 CXX test/cpp_headers/queue.o 00:03:00.374 CXX test/cpp_headers/reduce.o 00:03:00.374 CC app/fio/nvme/fio_plugin.o 00:03:00.374 CC examples/ioat/verify/verify.o 00:03:00.374 CC 
test/app/stub/stub.o 00:03:00.374 CC examples/ioat/perf/perf.o 00:03:00.374 CC test/app/histogram_perf/histogram_perf.o 00:03:00.374 CC test/env/pci/pci_ut.o 00:03:00.374 CC test/thread/poller_perf/poller_perf.o 00:03:00.374 CXX test/cpp_headers/rpc.o 00:03:00.374 LINK spdk_lspci 00:03:00.374 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.374 CC test/env/memory/memory_ut.o 00:03:00.374 CC test/app/bdev_svc/bdev_svc.o 00:03:00.374 CC app/fio/bdev/fio_plugin.o 00:03:00.374 CC test/app/jsoncat/jsoncat.o 00:03:00.374 CC test/env/vtophys/vtophys.o 00:03:00.374 CC test/dma/test_dma/test_dma.o 00:03:00.374 LINK spdk_nvme_discover 00:03:00.636 LINK spdk_trace_record 00:03:00.636 LINK interrupt_tgt 00:03:00.636 LINK nvmf_tgt 00:03:00.636 LINK rpc_client_test 00:03:00.895 LINK iscsi_tgt 00:03:00.895 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.895 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:00.895 LINK poller_perf 00:03:00.895 LINK stub 00:03:00.895 CXX test/cpp_headers/scheduler.o 00:03:00.895 CXX test/cpp_headers/scsi.o 00:03:00.895 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.895 CXX test/cpp_headers/scsi_spec.o 00:03:00.895 CXX test/cpp_headers/sock.o 00:03:00.895 LINK vtophys 00:03:00.895 LINK jsoncat 00:03:00.895 CXX test/cpp_headers/string.o 00:03:00.895 CXX test/cpp_headers/thread.o 00:03:00.895 CXX test/cpp_headers/stdinc.o 00:03:00.895 CXX test/cpp_headers/trace.o 00:03:00.895 CXX test/cpp_headers/trace_parser.o 00:03:00.895 CXX test/cpp_headers/tree.o 00:03:00.895 CXX test/cpp_headers/ublk.o 00:03:00.895 CXX test/cpp_headers/util.o 00:03:00.895 CXX test/cpp_headers/uuid.o 00:03:00.895 CXX test/cpp_headers/version.o 00:03:00.895 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.895 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.895 LINK spdk_tgt 00:03:00.895 CXX test/cpp_headers/vhost.o 00:03:00.895 LINK zipf 00:03:00.895 CXX test/cpp_headers/vmd.o 00:03:00.895 CXX test/cpp_headers/xor.o 00:03:00.895 CXX test/cpp_headers/zipf.o 00:03:00.895 
LINK spdk_dd 00:03:00.895 LINK histogram_perf 00:03:00.895 LINK bdev_svc 00:03:00.895 LINK ioat_perf 00:03:00.895 LINK env_dpdk_post_init 00:03:00.895 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:00.895 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:00.895 LINK verify 00:03:01.153 LINK spdk_trace 00:03:01.154 LINK pci_ut 00:03:01.154 LINK test_dma 00:03:01.154 LINK spdk_bdev 00:03:01.154 LINK spdk_nvme 00:03:01.412 LINK spdk_nvme_perf 00:03:01.412 LINK vhost_fuzz 00:03:01.412 LINK nvme_fuzz 00:03:01.412 CC test/event/event_perf/event_perf.o 00:03:01.412 CC test/event/reactor_perf/reactor_perf.o 00:03:01.412 CC test/event/reactor/reactor.o 00:03:01.412 LINK spdk_nvme_identify 00:03:01.412 CC test/event/app_repeat/app_repeat.o 00:03:01.412 CC examples/vmd/led/led.o 00:03:01.412 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.412 CC examples/idxd/perf/perf.o 00:03:01.412 CC test/event/scheduler/scheduler.o 00:03:01.412 CC examples/sock/hello_world/hello_sock.o 00:03:01.412 CC app/vhost/vhost.o 00:03:01.412 LINK spdk_top 00:03:01.412 CC examples/thread/thread/thread_ex.o 00:03:01.412 LINK mem_callbacks 00:03:01.412 LINK event_perf 00:03:01.412 LINK reactor_perf 00:03:01.412 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:01.412 CC test/nvme/reset/reset.o 00:03:01.669 LINK reactor 00:03:01.669 CC test/nvme/connect_stress/connect_stress.o 00:03:01.669 CC test/nvme/aer/aer.o 00:03:01.669 LINK app_repeat 00:03:01.669 CC test/nvme/e2edp/nvme_dp.o 00:03:01.669 CC test/nvme/overhead/overhead.o 00:03:01.669 CC test/nvme/fused_ordering/fused_ordering.o 00:03:01.669 CC test/nvme/reserve/reserve.o 00:03:01.669 CC test/nvme/startup/startup.o 00:03:01.669 CC test/nvme/boot_partition/boot_partition.o 00:03:01.669 CC test/nvme/cuse/cuse.o 00:03:01.669 CC test/nvme/simple_copy/simple_copy.o 00:03:01.669 CC test/nvme/compliance/nvme_compliance.o 00:03:01.669 LINK led 00:03:01.669 CC test/nvme/err_injection/err_injection.o 00:03:01.669 CC test/nvme/sgl/sgl.o 00:03:01.669 CC 
test/nvme/fdp/fdp.o 00:03:01.669 CC test/blobfs/mkfs/mkfs.o 00:03:01.669 CC test/accel/dif/dif.o 00:03:01.669 LINK lsvmd 00:03:01.669 CC test/lvol/esnap/esnap.o 00:03:01.669 LINK hello_sock 00:03:01.669 LINK memory_ut 00:03:01.669 LINK vhost 00:03:01.669 LINK thread 00:03:01.669 LINK scheduler 00:03:01.669 LINK connect_stress 00:03:01.669 LINK idxd_perf 00:03:01.669 LINK doorbell_aers 00:03:01.669 LINK boot_partition 00:03:01.669 LINK startup 00:03:01.669 LINK fused_ordering 00:03:01.669 LINK reserve 00:03:01.669 LINK err_injection 00:03:01.670 LINK reset 00:03:01.670 LINK simple_copy 00:03:01.670 LINK mkfs 00:03:01.670 LINK aer 00:03:01.927 LINK sgl 00:03:01.927 LINK nvme_dp 00:03:01.927 LINK overhead 00:03:01.927 LINK nvme_compliance 00:03:01.927 LINK fdp 00:03:01.927 LINK dif 00:03:01.927 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.927 CC examples/nvme/reconnect/reconnect.o 00:03:02.185 CC examples/nvme/hello_world/hello_world.o 00:03:02.185 CC examples/nvme/hotplug/hotplug.o 00:03:02.185 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:02.185 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.185 CC examples/nvme/arbitration/arbitration.o 00:03:02.185 CC examples/nvme/abort/abort.o 00:03:02.185 CC examples/accel/perf/accel_perf.o 00:03:02.185 LINK iscsi_fuzz 00:03:02.185 CC examples/blob/hello_world/hello_blob.o 00:03:02.185 CC examples/blob/cli/blobcli.o 00:03:02.185 LINK cmb_copy 00:03:02.185 LINK pmr_persistence 00:03:02.185 LINK hello_world 00:03:02.185 LINK hotplug 00:03:02.444 LINK arbitration 00:03:02.444 LINK reconnect 00:03:02.444 LINK abort 00:03:02.444 LINK nvme_manage 00:03:02.444 CC test/bdev/bdevio/bdevio.o 00:03:02.444 LINK hello_blob 00:03:02.444 LINK accel_perf 00:03:02.444 LINK cuse 00:03:02.703 LINK blobcli 00:03:02.703 LINK bdevio 00:03:02.961 CC examples/bdev/hello_world/hello_bdev.o 00:03:02.961 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.219 LINK hello_bdev 00:03:03.478 LINK bdevperf 00:03:04.044 CC examples/nvmf/nvmf/nvmf.o 
00:03:04.302 LINK nvmf 00:03:05.238 LINK esnap 00:03:05.497 00:03:05.497 real 0m43.367s 00:03:05.497 user 6m30.551s 00:03:05.497 sys 3m24.340s 00:03:05.497 14:07:57 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:05.497 14:07:57 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.497 ************************************ 00:03:05.497 END TEST make 00:03:05.497 ************************************ 00:03:05.497 14:07:57 -- common/autotest_common.sh@1142 -- $ return 0 00:03:05.497 14:07:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.497 14:07:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.497 14:07:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.497 14:07:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.497 14:07:57 -- pm/common@44 -- $ pid=2256980 00:03:05.497 14:07:57 -- pm/common@50 -- $ kill -TERM 2256980 00:03:05.497 14:07:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.497 14:07:57 -- pm/common@44 -- $ pid=2256981 00:03:05.497 14:07:57 -- pm/common@50 -- $ kill -TERM 2256981 00:03:05.497 14:07:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.497 14:07:57 -- pm/common@44 -- $ pid=2256983 00:03:05.497 14:07:57 -- pm/common@50 -- $ kill -TERM 2256983 00:03:05.497 14:07:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.497 14:07:57 -- pm/common@44 -- $ 
pid=2257001 00:03:05.497 14:07:57 -- pm/common@50 -- $ sudo -E kill -TERM 2257001 00:03:05.497 14:07:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.497 14:07:57 -- nvmf/common.sh@7 -- # uname -s 00:03:05.497 14:07:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.497 14:07:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.497 14:07:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.497 14:07:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.497 14:07:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.497 14:07:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.497 14:07:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.497 14:07:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.497 14:07:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.497 14:07:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.497 14:07:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:05.497 14:07:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:05.497 14:07:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.497 14:07:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.497 14:07:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.497 14:07:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.497 14:07:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.497 14:07:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.497 14:07:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.497 14:07:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.497 14:07:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.497 14:07:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.497 14:07:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.497 14:07:57 -- paths/export.sh@5 -- # export PATH 00:03:05.497 14:07:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.497 14:07:57 -- nvmf/common.sh@47 -- # : 0 00:03:05.497 14:07:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.497 14:07:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.497 14:07:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.497 14:07:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.497 14:07:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.497 14:07:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.497 14:07:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.497 14:07:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.497 14:07:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.497 14:07:57 -- spdk/autotest.sh@32 -- # 
uname -s 00:03:05.497 14:07:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.497 14:07:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.497 14:07:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.497 14:07:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.497 14:07:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.497 14:07:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.497 14:07:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.497 14:07:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.497 14:07:57 -- spdk/autotest.sh@48 -- # udevadm_pid=2315779 00:03:05.497 14:07:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.497 14:07:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.497 14:07:57 -- pm/common@17 -- # local monitor 00:03:05.497 14:07:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@21 -- # date +%s 00:03:05.497 14:07:57 -- pm/common@21 -- # date +%s 00:03:05.497 14:07:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.497 14:07:57 -- pm/common@21 -- # date +%s 00:03:05.497 14:07:57 -- pm/common@25 -- # sleep 1 00:03:05.497 14:07:57 -- pm/common@21 -- # date +%s 00:03:05.497 14:07:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720786077 00:03:05.497 14:07:57 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720786077 00:03:05.497 14:07:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720786077 00:03:05.497 14:07:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720786077 00:03:05.497 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720786077_collect-vmstat.pm.log 00:03:05.497 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720786077_collect-cpu-load.pm.log 00:03:05.497 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720786077_collect-cpu-temp.pm.log 00:03:05.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720786077_collect-bmc-pm.bmc.pm.log 00:03:06.689 14:07:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.689 14:07:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.689 14:07:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.689 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:03:06.689 14:07:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.689 14:07:58 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:06.689 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:03:06.689 14:07:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:06.689 14:07:58 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.689 14:07:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.689 14:07:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:06.689 14:07:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.689 14:07:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.689 14:07:58 -- common/autotest_common.sh@1455 -- # uname 00:03:06.689 14:07:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:06.689 14:07:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.689 14:07:58 -- common/autotest_common.sh@1475 -- # uname 00:03:06.689 14:07:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:06.689 14:07:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:06.689 14:07:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:06.689 14:07:58 -- spdk/autotest.sh@72 -- # hash lcov 00:03:06.689 14:07:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:06.689 14:07:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:06.689 --rc lcov_branch_coverage=1 00:03:06.689 --rc lcov_function_coverage=1 00:03:06.689 --rc genhtml_branch_coverage=1 00:03:06.689 --rc genhtml_function_coverage=1 00:03:06.689 --rc genhtml_legend=1 00:03:06.689 --rc geninfo_all_blocks=1 00:03:06.689 ' 00:03:06.689 14:07:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:06.689 --rc lcov_branch_coverage=1 00:03:06.689 --rc lcov_function_coverage=1 00:03:06.689 --rc genhtml_branch_coverage=1 00:03:06.689 --rc genhtml_function_coverage=1 00:03:06.689 --rc genhtml_legend=1 00:03:06.689 --rc geninfo_all_blocks=1 00:03:06.689 ' 00:03:06.689 14:07:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:06.689 --rc lcov_branch_coverage=1 00:03:06.689 --rc lcov_function_coverage=1 00:03:06.689 --rc genhtml_branch_coverage=1 00:03:06.689 --rc 
genhtml_function_coverage=1 00:03:06.689 --rc genhtml_legend=1 00:03:06.689 --rc geninfo_all_blocks=1 00:03:06.689 --no-external' 00:03:06.689 14:07:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:06.689 --rc lcov_branch_coverage=1 00:03:06.689 --rc lcov_function_coverage=1 00:03:06.689 --rc genhtml_branch_coverage=1 00:03:06.689 --rc genhtml_function_coverage=1 00:03:06.689 --rc genhtml_legend=1 00:03:06.689 --rc geninfo_all_blocks=1 00:03:06.689 --no-external' 00:03:06.689 14:07:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.689 lcov: LCOV version 1.14 00:03:06.689 14:07:58 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:10.876 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 
00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:10.876 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:10.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:10.877 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:10.877 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no 
functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:10.877 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:10.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:10.877 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:11.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:11.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:11.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:11.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:11.142 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:26.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:26.009 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.277 14:08:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:31.277 14:08:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.277 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:03:31.277 14:08:23 -- spdk/autotest.sh@91 -- # rm -f 00:03:31.277 14:08:23 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.805 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:33.805 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:33.805 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:33.805 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:34.062 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:34.321 0000:80:04.0 (8086 2021): Already using the ioatdma driver 
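The long run of geninfo warnings above is expected for header-compilation units: their .gcno files contain no function records, so GCOV reports "no functions found" and geninfo warns that no data was produced. A minimal sketch of pulling the affected .gcno basenames out of such output (the log text and paths below are made up for illustration, not taken from this build):

```shell
#!/bin/sh
# Sample geninfo output to parse; these paths are illustrative only.
log='geninfo: WARNING: GCOV did not produce any data for /tmp/spdk/test/cpp_headers/ftl.gcno
geninfo: WARNING: GCOV did not produce any data for /tmp/spdk/test/cpp_headers/json.gcno
geninfo: processing /tmp/spdk/lib/nvme/nvme.gcno'

# Keep only the "did not produce any data" warnings and reduce each
# one to the basename of the .gcno file it complains about.
warnings=$(printf '%s\n' "$log" \
  | grep 'GCOV did not produce any data' \
  | awk '{print $NF}' \
  | xargs -n1 basename)

printf '%s\n' "$warnings"
```

In a real triage pass this makes it easy to confirm that every warned file is a header stub rather than a library object that should have had coverage.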
00:03:34.321 14:08:26 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:34.321 14:08:26 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.321 14:08:26 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.321 14:08:26 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.321 14:08:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.321 14:08:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.321 14:08:26 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.321 14:08:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.321 14:08:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.321 14:08:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:34.321 14:08:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.321 14:08:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:34.321 14:08:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:34.321 14:08:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:34.321 14:08:26 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:34.321 No valid GPT data, bailing 00:03:34.321 14:08:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.321 14:08:26 -- scripts/common.sh@391 -- # pt= 00:03:34.321 14:08:26 -- scripts/common.sh@392 -- # return 1 00:03:34.321 14:08:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:34.321 1+0 records in 00:03:34.321 1+0 records out 00:03:34.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454233 s, 231 MB/s 00:03:34.321 14:08:26 -- spdk/autotest.sh@118 -- # sync 00:03:34.321 14:08:26 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:34.321 14:08:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:34.321 14:08:26 -- 
common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.584 14:08:30 -- spdk/autotest.sh@124 -- # uname -s 00:03:39.584 14:08:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:39.584 14:08:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.584 14:08:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.584 14:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.584 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:39.584 ************************************ 00:03:39.584 START TEST setup.sh 00:03:39.584 ************************************ 00:03:39.584 14:08:30 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:39.584 * Looking for test storage... 00:03:39.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:39.584 14:08:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:39.584 14:08:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.584 14:08:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:39.584 14:08:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.584 14:08:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.584 14:08:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.584 ************************************ 00:03:39.584 START TEST acl 00:03:39.584 ************************************ 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:39.584 * Looking for test storage... 
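The `START TEST` / `END TEST` asterisk banners above are printed by SPDK's `run_test` helper in `autotest_common.sh`. The sketch below is a simplified stand-in for illustration, not the real implementation (which also records timings and manages xtrace state):

```shell
#!/bin/sh
# Simplified stand-in for SPDK's run_test: wrap a named test command
# in START/END banners and propagate its exit status.
run_test() {
  name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  "$@"
  rc=$?
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo echo "hello from demo"
```

Because the wrapper returns the wrapped command's exit status, a failing inner test still fails the surrounding autotest stage.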
00:03:39.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.584 14:08:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.584 14:08:31 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:39.584 14:08:31 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.584 14:08:31 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.114 14:08:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:42.114 14:08:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:42.114 14:08:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.114 14:08:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:42.114 14:08:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.114 14:08:33 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:44.646 Hugepages 00:03:44.646 node hugesize free / total 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 00:03:44.646 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:44.646 14:08:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:44.646 14:08:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.646 14:08:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.646 14:08:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.646 ************************************ 00:03:44.646 START TEST denied 00:03:44.646 ************************************ 00:03:44.646 14:08:36 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:44.646 14:08:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:44.646 14:08:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:44.646 14:08:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:44.646 14:08:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.646 14:08:36 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.939 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.939 14:08:39 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.940 14:08:39 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:47.940 14:08:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.940 14:08:39 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.231 00:03:51.231 real 0m6.764s 00:03:51.231 user 0m2.231s 00:03:51.231 sys 0m3.846s 00:03:51.231 14:08:43 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.231 14:08:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:51.231 ************************************ 00:03:51.231 END TEST denied 00:03:51.231 ************************************ 00:03:51.231 14:08:43 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:51.231 14:08:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:51.231 14:08:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.231 14:08:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.231 14:08:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.231 
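The denied test above runs `setup.sh config` with `PCI_BLOCKED=' 0000:5e:00.0'` and greps for the resulting "Skipping denied controller" line. A minimal sketch of that blocklist filtering, assuming a space-separated BDF list (the device list here is hypothetical; the real script enumerates PCI devices from sysfs):

```shell
#!/bin/sh
# Hypothetical device list; setup.sh builds this from PCI enumeration.
devs="0000:5e:00.0 0000:00:04.0 0000:80:04.0"
PCI_BLOCKED=" 0000:5e:00.0"

kept=""
for bdf in $devs; do
  # Skip any BDF that appears in the space-separated blocklist.
  case " $PCI_BLOCKED " in
    *" $bdf "*)
      echo "Skipping denied controller at $bdf"
      continue
      ;;
  esac
  kept="$kept $bdf"
done
echo "kept:$kept"
```

Padding both the blocklist and the candidate with spaces makes the `case` match whole BDFs only, so `0000:5e:00.0` cannot accidentally match a longer string containing it.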
************************************ 00:03:51.231 START TEST allowed 00:03:51.231 ************************************ 00:03:51.231 14:08:43 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:51.231 14:08:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:51.231 14:08:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:51.231 14:08:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:51.231 14:08:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.231 14:08:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.488 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.488 14:08:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:55.488 14:08:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:55.488 14:08:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:55.488 14:08:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.488 14:08:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.026 00:03:58.026 real 0m6.577s 00:03:58.026 user 0m1.994s 00:03:58.026 sys 0m3.703s 00:03:58.026 14:08:49 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.026 14:08:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:58.026 ************************************ 00:03:58.026 END TEST allowed 00:03:58.026 ************************************ 00:03:58.026 14:08:49 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:58.026 00:03:58.026 real 0m18.732s 00:03:58.026 user 0m6.063s 00:03:58.026 sys 0m11.081s 00:03:58.026 14:08:49 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.026 14:08:49 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:03:58.026 ************************************ 00:03:58.026 END TEST acl 00:03:58.026 ************************************ 00:03:58.026 14:08:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:58.026 14:08:49 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.026 14:08:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.026 14:08:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.026 14:08:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.026 ************************************ 00:03:58.026 START TEST hugepages 00:03:58.026 ************************************ 00:03:58.026 14:08:49 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:58.026 * Looking for test storage... 00:03:58.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.026 14:08:49 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 173902348 kB' 'MemAvailable: 176752080 kB' 'Buffers: 3896 kB' 'Cached: 9679448 kB' 'SwapCached: 0 kB' 'Active: 6659432 kB' 'Inactive: 3493732 kB' 'Active(anon): 6267732 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473264 kB' 'Mapped: 165072 kB' 'Shmem: 5797912 kB' 'KReclaimable: 217032 kB' 'Slab: 756680 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 539648 kB' 'KernelStack: 20304 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 7738996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:03:58.026 14:08:49 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.026 14:08:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [… the same [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats for every remaining /proc/meminfo field …] 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:58.028 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.029
14:08:49 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.029 14:08:49 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.029 14:08:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.029 14:08:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.029 14:08:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.029 
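The xtrace above shows setup/common.sh's `get_meminfo` walking /proc/meminfo with `IFS=': ' read -r var val _` until the requested key (here `Hugepagesize`) matches, then echoing its value (2048 kB). A minimal, self-contained sketch of that key/value parsing pattern, run against a fixed sample rather than the live /proc/meminfo (the helper name mirrors the log but is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern from the log: split each
# "Key: value unit" line on ':' and whitespace, and print the value
# of the first key that matches the requested one.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

# Deterministic sample instead of reading the live /proc/meminfo:
sample=$'MemTotal: 191381156 kB\nHugePages_Total: 2048\nHugepagesize: 2048 kB'
hp=$(get_meminfo Hugepagesize <<< "$sample")
echo "Hugepagesize: $hp kB"
```

The real script additionally loads the file with `mapfile` and strips per-node `Node N` prefixes (for the `/sys/devices/system/node/*/meminfo` case); the loop above captures only the core split-and-match step visible in the trace.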
************************************ 00:03:58.029 START TEST default_setup 00:03:58.029 ************************************ 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.029 14:08:49 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.320 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.320 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.895 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.895 
14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.895 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176113312 kB' 'MemAvailable: 178963044 kB' 'Buffers: 3896 kB' 'Cached: 9679552 kB' 'SwapCached: 0 kB' 'Active: 6674744 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283044 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 
'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488400 kB' 'Mapped: 164524 kB' 'Shmem: 5798016 kB' 'KReclaimable: 217032 kB' 'Slab: 754160 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537128 kB' 'KernelStack: 20352 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7756388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.896 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.896 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [… the same [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for the remaining /proc/meminfo fields …] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.897 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176118240 kB' 'MemAvailable: 178967972 kB' 'Buffers: 3896 kB' 'Cached: 9679556 kB' 'SwapCached: 0 kB' 'Active: 6673924 kB' 'Inactive: 3493732 kB' 'Active(anon): 6282224 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487572 kB' 'Mapped: 164492 kB' 'Shmem: 5798020 kB' 'KReclaimable: 217032 kB' 'Slab: 754096 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537064 kB' 'KernelStack: 20304 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 
7756408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.898 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.899 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 
14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.900 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.900 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
[... the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" check repeats for each remaining /proc/meminfo key ...]
00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.901 
14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.901 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176118648 kB' 'MemAvailable: 178968380 kB' 'Buffers: 3896 kB' 'Cached: 9679572 kB' 'SwapCached: 0 kB' 'Active: 6674424 kB' 'Inactive: 3493732 kB' 'Active(anon): 6282724 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488044 kB' 'Mapped: 164492 kB' 'Shmem: 5798036 kB' 'KReclaimable: 217032 kB' 'Slab: 754196 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537164 
kB' 'KernelStack: 20384 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7756428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:01.901 
[... the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" check repeats for each /proc/meminfo key until the match ...]
00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.903 nr_hugepages=1024 00:04:01.903 14:08:53 
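The trace above is the `get_meminfo` helper from setup/common.sh scanning a meminfo snapshot for a single key (here HugePages_Surp, then HugePages_Rsvd) and echoing its value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (the `mem_f` file parameter is an illustrative addition; the real helper reads /proc/meminfo or a per-node meminfo file and strips the "Node N" prefix):

```shell
# Sketch of the get_meminfo pattern seen in the trace: walk meminfo
# line by line and print the value of the requested key.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo}
  local var val _
  # IFS=': ' splits "HugePages_Rsvd:    0" into key and value,
  # mirroring the repeated `IFS=': '` / `read -r var val _` pairs above.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1
}
```

With the snapshot shown in the log (HugePages_Rsvd: 0, HugePages_Surp: 0), each call prints 0 and returns 0, which is why hugepages.sh assigns surp=0 and resv=0.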
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.903 resv_hugepages=0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.903 surplus_hugepages=0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.903 anon_hugepages=0 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.903 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 
'MemFree: 176117640 kB' 'MemAvailable: 178967372 kB' 'Buffers: 3896 kB' 'Cached: 9679612 kB' 'SwapCached: 0 kB' 'Active: 6674108 kB' 'Inactive: 3493732 kB' 'Active(anon): 6282408 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487652 kB' 'Mapped: 164492 kB' 'Shmem: 5798076 kB' 'KReclaimable: 217032 kB' 'Slab: 754200 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537168 kB' 'KernelStack: 20368 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7756452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 
[... the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" check repeats for each remaining /proc/meminfo key ...]
00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.904 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.905 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.906 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- 
# printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92333300 kB' 'MemUsed: 5282328 kB' 'SwapCached: 0 kB' 'Active: 1506120 kB' 'Inactive: 235916 kB' 'Active(anon): 1323320 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503688 kB' 'Mapped: 53644 kB' 'AnonPages: 241572 kB' 'Shmem: 1084972 kB' 'KernelStack: 11912 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 330268 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log trimmed: identical setup/common.sh@31-32 per-key xtrace scan of /sys/devices/system/node/node0/meminfo, continuing past every key until HugePages_Surp matches]
00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.907 node0=1024 expecting 1024 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.907 00:04:01.907 real 0m3.881s 00:04:01.907 user 0m1.237s 00:04:01.907 sys 0m1.901s 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.907 14:08:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:01.907 ************************************ 00:04:01.907 END TEST default_setup 00:04:01.907 ************************************ 00:04:01.907 14:08:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.907 14:08:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:01.907 14:08:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.907 14:08:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.907 14:08:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.167
************************************ 00:04:02.167 START TEST per_node_1G_alloc 00:04:02.167 ************************************ 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.167 
14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.167 14:08:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.740 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.740 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:04.740 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.740 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.740 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176100444 kB' 'MemAvailable: 178950176 kB' 'Buffers: 3896 kB' 'Cached: 9679688 kB' 'SwapCached: 0 kB' 'Active: 6675268 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283568 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488664 kB' 'Mapped: 164528 kB' 'Shmem: 5798152 kB' 'KReclaimable: 217032 kB' 'Slab: 754584 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537552 kB' 'KernelStack: 20400 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7757088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.741 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 
14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.742 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176103596 kB' 'MemAvailable: 178953328 kB' 'Buffers: 3896 kB' 'Cached: 9679692 kB' 'SwapCached: 0 kB' 'Active: 6675004 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283304 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488368 kB' 'Mapped: 164500 kB' 'Shmem: 5798156 kB' 'KReclaimable: 217032 kB' 'Slab: 754696 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537664 kB' 'KernelStack: 20384 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7757108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.742 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.743 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.744 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 
14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176102840 kB' 'MemAvailable: 178952572 kB' 'Buffers: 3896 kB' 'Cached: 9679708 kB' 'SwapCached: 0 kB' 'Active: 6675000 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283300 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488384 kB' 'Mapped: 164500 kB' 'Shmem: 5798172 kB' 'KReclaimable: 217032 kB' 'Slab: 754696 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537664 kB' 'KernelStack: 20384 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7757132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 
14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.744 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.745 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.745 14:08:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:04.745 [trace condensed: setup/common.sh@31-32 repeats `IFS=': '` / `read -r var val _` for each remaining /proc/meminfo key (Zswap … HugePages_Free); none matches HugePages_Rsvd, so every iteration takes `continue`] 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.746 nr_hugepages=1024 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.746 resv_hugepages=0 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.746 surplus_hugepages=0 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.746 anon_hugepages=0 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- #
(( 1024 == nr_hugepages + surp + resv )) 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.746 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176102084 kB' 'MemAvailable: 178951816 kB' 'Buffers: 3896 kB' 'Cached: 9679732 kB' 'SwapCached: 0 kB' 'Active: 6675016 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283316 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488380 kB' 'Mapped: 164500 kB' 'Shmem: 5798196 kB' 'KReclaimable: 217032 kB' 'Slab: 754696 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537664 kB' 'KernelStack: 20384 kB' 'PageTables: 8416 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7757152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:04.746 [trace condensed: setup/common.sh@31-32 again repeats `IFS=': '` / `read -r var val _` for each /proc/meminfo key (MemTotal … Unaccepted); none matches HugePages_Total, so every iteration takes `continue`] 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024
== nr_hugepages + surp + resv )) 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.747 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.748 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93357136 kB' 'MemUsed: 4258492 kB' 'SwapCached: 0 kB' 'Active: 1506388 kB' 'Inactive: 235916 kB' 'Active(anon): 1323588 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503788 kB' 'Mapped: 53648 kB' 'AnonPages: 241716 kB' 'Shmem: 1085072 kB' 'KernelStack: 11896 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 330812 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.748 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.749 14:08:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.749 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82744948 kB' 'MemUsed: 11020580 kB' 'SwapCached: 0 kB' 'Active: 5168660 kB' 'Inactive: 3257816 kB' 'Active(anon): 4959760 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8179864 kB' 'Mapped: 110852 kB' 'AnonPages: 246652 kB' 'Shmem: 4713148 kB' 'KernelStack: 8472 kB' 'PageTables: 3288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133904 kB' 'Slab: 423884 kB' 'SReclaimable: 133904 kB' 'SUnreclaim: 289980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.010 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [... repeated xtrace elided: the same [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _ cycle runs for each remaining /proc/meminfo field, Mapped through HugePages_Free ...] 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in
"${!nodes_test[@]}" 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.011 node0=512 expecting 512 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:05.011 node1=512 expecting 512 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.011 00:04:05.011 real 0m2.844s 00:04:05.011 user 0m1.185s 00:04:05.011 sys 0m1.718s 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.011 14:08:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.011 ************************************ 00:04:05.011 END TEST per_node_1G_alloc 00:04:05.011 ************************************ 00:04:05.011 14:08:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:05.011 14:08:56 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:05.011 14:08:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.011 14:08:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.011 14:08:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.011 
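The xtrace above shows setup/common.sh's get_meminfo walking meminfo output with `IFS=': ' read -r var val _` until the requested field (here HugePages_Surp) matches, then echoing its value, or 0 when the field is absent. A minimal self-contained sketch of that pattern follows; the helper name `meminfo_get` and the stdin-driven form are illustrative, not SPDK's actual code, which also handles per-node files under /sys/devices/system/node/node<N>/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above: split each meminfo line
# on ': ' and print the value of the first field whose name matches.
# meminfo_get is a hypothetical name, not part of SPDK.
meminfo_get() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done
	echo 0 # field absent: report 0, as the traced script does
}

# Sample lines in the same "Field: value" format as /proc/meminfo.
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Surp: 0' |
	meminfo_get HugePages_Surp
```

Fed /proc/meminfo on the test node instead of the sample lines, this prints the same 0 that the `echo 0` in the trace reports.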
************************************ 00:04:05.011 START TEST even_2G_alloc 00:04:05.011 ************************************ 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.011 14:08:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.547 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.547 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.547 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.813 14:08:59 
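The get_test_nr_hugepages_per_node trace above distributes nr_hugepages=1024 evenly across the two NUMA nodes, which is why the banners later read `node0=512 expecting 512` and `node1=512 expecting 512`. A standalone sketch of that split follows; the variable names mirror setup/hugepages.sh, but this is an illustration, not the script itself:

```shell
#!/usr/bin/env bash
# Illustrative even split of the test hugepage count across NUMA nodes,
# mirroring the nodes_test[] loop traced above (not SPDK's exact code).
_nr_hugepages=1024
_no_nodes=2
declare -a nodes_test

per_node=$((_nr_hugepages / _no_nodes)) # 512 per node here
while ((_no_nodes > 0)); do
	nodes_test[_no_nodes - 1]=$per_node
	((_no_nodes--))
done

for node in "${!nodes_test[@]}"; do
	echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done
```

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, as set in the trace, each of the two nodes ends up with 512 pages.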
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.813 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176118096 kB' 'MemAvailable: 178967828 kB' 'Buffers: 3896 kB' 'Cached: 9679848 kB' 'SwapCached: 0 kB' 'Active: 6675472 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283772 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488808 kB' 'Mapped: 164024 kB' 'Shmem: 5798312 kB' 'KReclaimable: 217032 kB' 'Slab: 755096 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538064 kB' 'KernelStack: 20976 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315016 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:07.814 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.814 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ [... repeated xtrace elided: the same [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _ cycle runs for each /proc/meminfo field, MemFree through Percpu ...] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176123160 kB' 'MemAvailable: 178972892 kB' 'Buffers: 3896 kB' 'Cached: 9679852 kB' 'SwapCached: 0 kB' 'Active: 6675748 kB' 'Inactive: 3493732 kB' 'Active(anon): 6284048 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488700 kB' 'Mapped: 164088 kB' 'Shmem: 5798316 kB' 'KReclaimable: 217032 kB' 'Slab: 754908 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537876 kB' 'KernelStack: 20928 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 
14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.817 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176124008 kB' 'MemAvailable: 178973740 kB' 'Buffers: 3896 kB' 'Cached: 9679868 kB' 'SwapCached: 0 kB' 'Active: 6675624 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283924 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488776 kB' 'Mapped: 163568 kB' 'Shmem: 5798332 kB' 'KReclaimable: 217032 kB' 'Slab: 754768 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537736 kB' 'KernelStack: 21088 kB' 'PageTables: 10104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 
14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.819 nr_hugepages=1024 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.819 resv_hugepages=0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.819 surplus_hugepages=0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.819 anon_hugepages=0 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.819 14:08:59 
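The long trace above is the expansion of the `get_meminfo` helper in `setup/common.sh`: it reads the lines of `/proc/meminfo` (as shown by `mem_f=/proc/meminfo` and `mapfile -t mem` later in the trace), splits each `Key: value` line on `': '` with `read -r var val _`, skips every field that is not the requested one (here `HugePages_Rsvd`), and echoes the matching value (`0`). A minimal, self-contained sketch of that pattern, using a scratch file in place of the real `/proc/meminfo` with illustrative values:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each
# "Key: value kB" line on ': ' and print the value of one field.
# (The real helper also strips a leading "Node N " prefix when reading
# a per-node meminfo file; that step is omitted here.)
get_meminfo() {
    local get=$1 file=$2
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the field we want, keep scanning
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Fabricated meminfo snippet for demonstration (values are illustrative only).
printf 'HugePages_Total: 1024\nHugePages_Rsvd: 0\n' > /tmp/meminfo.sample
get_meminfo HugePages_Rsvd /tmp/meminfo.sample    # prints 0
```

Each trace line pair (`[[ Field == \H\u\g\e... ]]` then `continue`) is simply one non-matching iteration of this loop with xtrace enabled, which is why the log repeats once per meminfo field.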
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176131820 kB' 'MemAvailable: 178981552 kB' 'Buffers: 3896 kB' 'Cached: 9679888 kB' 'SwapCached: 0 kB' 'Active: 6676320 kB' 'Inactive: 3493732 kB' 'Active(anon): 6284620 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489968 kB' 'Mapped: 163576 kB' 'Shmem: 5798352 kB' 'KReclaimable: 217032 kB' 'Slab: 754768 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537736 kB' 'KernelStack: 21296 kB' 'PageTables: 10980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315064 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 
14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.820 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.821 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93384556 kB' 'MemUsed: 4231072 kB' 'SwapCached: 0 kB' 'Active: 1509508 kB' 'Inactive: 235916 kB' 'Active(anon): 1326708 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503864 kB' 'Mapped: 53204 kB' 'AnonPages: 244656 kB' 'Shmem: 1085148 kB' 'KernelStack: 12856 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 331148 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 248020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82747536 kB' 'MemUsed: 11017992 kB' 'SwapCached: 0 kB' 'Active: 5167772 kB' 'Inactive: 3257816 kB' 'Active(anon): 4958872 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8179960 kB' 'Mapped: 110364 kB' 'AnonPages: 245736 kB' 'Shmem: 4713244 kB' 'KernelStack: 8408 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133904 kB' 'Slab: 423620 kB' 'SReclaimable: 133904 kB' 'SUnreclaim: 289716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 
14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.823 node0=512 expecting 512 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.823 14:08:59 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.823 node1=512 expecting 512 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.823 00:04:07.823 real 0m2.965s 00:04:07.823 user 0m1.255s 00:04:07.823 sys 0m1.777s 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.823 14:08:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.823 ************************************ 00:04:07.823 END TEST even_2G_alloc 00:04:07.823 ************************************ 00:04:08.083 14:08:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.083 14:08:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:08.083 14:08:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.083 14:08:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.083 14:08:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.083 ************************************ 00:04:08.083 START TEST odd_alloc 00:04:08.084 ************************************ 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:08.084 14:08:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.084 14:08:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.617 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.617 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.617 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176109948 kB' 'MemAvailable: 178959680 kB' 'Buffers: 3896 kB' 'Cached: 9680000 kB' 'SwapCached: 0 kB' 'Active: 6682460 kB' 'Inactive: 3493732 kB' 'Active(anon): 6290760 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495784 kB' 
'Mapped: 164588 kB' 'Shmem: 5798464 kB' 'KReclaimable: 217032 kB' 'Slab: 754004 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 536972 kB' 'KernelStack: 20464 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7757836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314876 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.878 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.879 
14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.879 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176110900 kB' 'MemAvailable: 178960632 kB' 'Buffers: 3896 kB' 'Cached: 9680004 kB' 'SwapCached: 0 kB' 'Active: 6681704 kB' 'Inactive: 3493732 kB' 'Active(anon): 6290004 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494996 kB' 'Mapped: 164516 kB' 'Shmem: 5798468 kB' 'KReclaimable: 217032 kB' 'Slab: 754052 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537020 kB' 'KernelStack: 20416 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7757856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.880 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176111152 kB' 'MemAvailable: 178960884 kB' 'Buffers: 3896 kB' 'Cached: 9680020 kB' 'SwapCached: 0 kB' 'Active: 6682120 kB' 'Inactive: 3493732 kB' 'Active(anon): 6290420 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495396 kB' 'Mapped: 164516 kB' 'Shmem: 5798484 kB' 'KReclaimable: 217032 kB' 'Slab: 754044 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537012 kB' 'KernelStack: 20400 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7759364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.881 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 
14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.882 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:10.883 nr_hugepages=1025 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.883 resv_hugepages=0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.883 surplus_hugepages=0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.883 anon_hugepages=0 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176103092 kB' 'MemAvailable: 178952824 kB' 'Buffers: 3896 kB' 'Cached: 9680044 kB' 'SwapCached: 0 kB' 'Active: 6687124 kB' 'Inactive: 3493732 kB' 'Active(anon): 6295424 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500912 kB' 'Mapped: 164516 kB' 'Shmem: 5798508 kB' 'KReclaimable: 217032 kB' 'Slab: 754044 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537012 kB' 'KernelStack: 20432 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7764016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314800 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.883 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.884 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.885 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.885 14:09:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93363068 kB' 'MemUsed: 4252560 kB' 'SwapCached: 0 kB' 'Active: 1517496 kB' 'Inactive: 235916 kB' 'Active(anon): 1334696 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503884 kB' 'Mapped: 53388 kB' 'AnonPages: 252852 kB' 'Shmem: 1085168 kB' 'KernelStack: 11928 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 330636 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.885 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.885 14:09:02 [identical continue iterations for the remaining non-matching node0 meminfo fields elided] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82740804 kB' 'MemUsed: 11024724 kB' 'SwapCached: 0 kB' 'Active: 5169620 kB' 'Inactive: 3257816 kB'
'Active(anon): 4960720 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8180096 kB' 'Mapped: 111164 kB' 'AnonPages: 247560 kB' 'Shmem: 4713380 kB' 'KernelStack: 8488 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133904 kB' 'Slab: 423408 kB' 'SReclaimable: 133904 kB' 'SUnreclaim: 289504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.886 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.886 
14:09:02 [identical continue iterations for the remaining non-matching node1 meminfo fields elided] 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:10.888 node0=512 expecting 513 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:10.888 node1=513 expecting 512 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:10.888 00:04:10.888 real 0m2.968s 00:04:10.888 user 0m1.173s 00:04:10.888 sys 0m1.860s 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.888 14:09:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.888 ************************************ 00:04:10.888 END TEST odd_alloc 00:04:10.888 ************************************ 00:04:10.888 14:09:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.888 14:09:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.888 14:09:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.888 14:09:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.888 14:09:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.147 ************************************ 00:04:11.147 START TEST custom_alloc 00:04:11.147 ************************************ 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:11.147 14:09:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:11.147 14:09:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:11.147 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.148 14:09:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.682 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:13.682 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:04:13.682 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.682 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175068708 kB' 'MemAvailable: 177918440 kB' 'Buffers: 3896 kB' 'Cached: 9680148 kB' 'SwapCached: 0 kB' 'Active: 6682452 kB' 'Inactive: 3493732 kB' 'Active(anon): 6290752 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495388 kB' 'Mapped: 164584 kB' 'Shmem: 5798612 kB' 'KReclaimable: 217032 kB' 'Slab: 754108 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537076 kB' 'KernelStack: 20416 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7758440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314892 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.682 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.683 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.683 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175068916 kB' 'MemAvailable: 177918648 kB' 'Buffers: 3896 kB' 'Cached: 9680152 kB' 'SwapCached: 0 kB' 'Active: 6682220 kB' 'Inactive: 3493732 kB' 'Active(anon): 6290520 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495276 kB' 'Mapped: 164548 kB' 'Shmem: 5798616 kB' 'KReclaimable: 217032 kB' 'Slab: 754156 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537124 kB' 'KernelStack: 20464 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7758460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314908 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.684 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.948 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175069796 kB' 'MemAvailable: 177919528 kB' 'Buffers: 3896 kB' 'Cached: 9680152 kB' 'SwapCached: 0 kB' 'Active: 6681472 kB' 'Inactive: 3493732 kB' 'Active(anon): 6289772 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494516 kB' 'Mapped: 164548 kB' 'Shmem: 5798616 kB' 'KReclaimable: 217032 kB' 'Slab: 754144 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537112 kB' 'KernelStack: 20368 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7758480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314860 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 
14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.949 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.950 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:13.951 nr_hugepages=1536 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.951 resv_hugepages=0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.951 surplus_hugepages=0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.951 anon_hugepages=0 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175069544 kB' 'MemAvailable: 177919276 kB' 'Buffers: 3896 kB' 'Cached: 9680208 kB' 'SwapCached: 0 kB' 'Active: 6681300 kB' 'Inactive: 3493732 kB' 'Active(anon): 6289600 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494188 kB' 'Mapped: 164548 kB' 'Shmem: 5798672 kB' 'KReclaimable: 217032 kB' 'Slab: 754144 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 537112 kB' 'KernelStack: 20384 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7758500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314860 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.951 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 
14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.952 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.953 14:09:05 
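The long run of `read -r var val _` / `continue` lines above is the xtrace of `get_meminfo` scanning meminfo output one `Key: value` pair at a time until the requested key (here `HugePages_Total`) matches, then echoing its value. A minimal standalone sketch of that same pattern — reading with `IFS=': '`, skipping non-matching keys, echoing the match — might look like the following (it feeds a fixed here-doc with values taken from the trace instead of reading `/proc/meminfo`, and uses a quoted string compare rather than the escaped glob pattern the real `setup/common.sh` trace shows):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan pattern seen in the trace above:
# split each "Key: value [unit]" line on ': ', skip lines whose key
# does not match, and echo the value for the requested key.
# Self-contained: reads a here-doc of sample values from the trace
# instead of /proc/meminfo or a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # trace shows one continue per non-matching key
        echo "$val"
        return 0
    done <<'EOF'
MemTotal: 191381156 kB
HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Rsvd: 0
EOF
    return 1   # key not found
}

get_meminfo_sketch HugePages_Total   # prints 1536
```

With xtrace enabled (`set -x`), each loop iteration of a scan like this emits the `IFS=': '`, `read -r var val _`, `[[ ... ]]`, and `continue` lines seen above — which is why one `get_meminfo` call produces hundreds of log lines before the final `echo 1536` / `return 0`.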
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.953 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93384016 kB' 'MemUsed: 4231612 kB' 'SwapCached: 0 kB' 'Active: 1512640 kB' 'Inactive: 235916 kB' 'Active(anon): 1329840 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503896 kB' 'Mapped: 53384 kB' 'AnonPages: 247884 kB' 'Shmem: 1085180 kB' 'KernelStack: 11944 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 330600 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.953 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 81685492 kB' 'MemUsed: 12080036 kB' 'SwapCached: 0 kB' 'Active: 5169012 kB' 'Inactive: 3257816 kB' 'Active(anon): 4960112 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8180232 kB' 'Mapped: 111164 kB' 'AnonPages: 246684 kB' 'Shmem: 4713516 kB' 'KernelStack: 8472 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133904 kB' 'Slab: 423544 kB' 'SReclaimable: 133904 kB' 'SUnreclaim: 289640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.954 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.955 14:09:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical skip/continue xtrace elided for the remaining /proc/meminfo fields through HugePages_Free ...] 00:04:13.955 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.956 14:09:05
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.956 node0=512 expecting 512 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:13.956 node1=1024 expecting 1024 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:13.956 00:04:13.956 real 0m2.919s 00:04:13.956 user 0m1.226s 00:04:13.956 sys 0m1.724s 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.956 14:09:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.956 ************************************ 00:04:13.956 END TEST custom_alloc 00:04:13.956 ************************************ 00:04:13.956 14:09:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.956 14:09:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.956 14:09:05 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.956 14:09:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.956 14:09:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.956 ************************************ 00:04:13.956 START TEST no_shrink_alloc 00:04:13.956 ************************************ 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.956 14:09:05 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.956 14:09:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.558 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.558 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.1 
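The get_test_nr_hugepages trace above turns the 2097152 kB request into 1024 pages and assigns them to the caller's node list (node 0 here). A simplified standalone rendering of that arithmetic — variable names follow the trace, but this is a sketch, not the actual setup/hugepages.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages arithmetic visible in the trace:
# divide the requested size by the default hugepage size, then pin the
# resulting page count to each requested NUMA node.
default_hugepages=2048                 # kB, Hugepagesize from the meminfo dump
size=2097152                           # kB, requested by no_shrink_alloc
nr_hugepages=$(( size / default_hugepages ))
node_ids=(0)                           # node_ids=('0') in the trace
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
echo "node0=${nodes_test[0]}"          # node0=1024
```

The later `node0=512 expecting 512` / `node1=1024 expecting 1024` lines in the custom_alloc output come from the same bookkeeping, just with the pages split across two nodes.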
(8086 2021): Already using the vfio-pci driver 00:04:16.558 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176131692 kB' 'MemAvailable: 178981424 kB' 'Buffers: 3896 kB' 'Cached: 9680304 kB' 'SwapCached: 0 kB' 'Active: 6676888 kB' 'Inactive: 3493732 kB' 'Active(anon): 6285188 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489332 kB' 'Mapped: 163748 kB' 'Shmem: 5798768 kB' 'KReclaimable: 217032 kB' 'Slab: 755208 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538176 kB' 'KernelStack: 20512 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7754516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.558 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical skip/continue xtrace elided for the remaining /proc/meminfo fields through Percpu ...] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.559 14:09:08
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.559 14:09:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176129172 kB' 'MemAvailable: 178978904 kB' 'Buffers: 3896 kB' 'Cached: 9680308 kB' 'SwapCached: 0 kB' 'Active: 6676756 kB' 'Inactive: 3493732 kB' 'Active(anon): 6285056 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489796 kB' 'Mapped: 163652 kB' 'Shmem: 5798772 kB' 'KReclaimable: 217032 kB' 'Slab: 755228 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538196 kB' 'KernelStack: 20480 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7754532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.559 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.560 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[... xtrace output elided: the IFS=': ' read/continue loop repeats identically for each remaining /proc/meminfo field until HugePages_Surp matches ...] 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.561 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176128668 kB' 'MemAvailable: 178978400 kB' 'Buffers: 3896 kB' 'Cached: 9680308 kB' 'SwapCached: 0 kB' 'Active: 6676852 kB' 'Inactive: 3493732 kB' 'Active(anon): 6285152 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489964 kB' 'Mapped: 163652 kB' 'Shmem: 5798772 kB' 'KReclaimable: 217032 kB' 'Slab: 755204 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538172 kB' 'KernelStack: 20576 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7754556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 
kB' 'DirectMap1G: 184549376 kB' 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.562 14:09:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' [... xtrace output elided: the read/continue loop repeats identically for each remaining /proc/meminfo field ...]
00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.563 nr_hugepages=1024 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.563 resv_hugepages=0 00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:04:16.563 surplus_hugepages=0
00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:16.563 anon_hugepages=0
00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.563 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.564 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176129104 kB' 'MemAvailable: 178978836 kB' 'Buffers: 3896 kB' 'Cached: 9680344 kB' 'SwapCached: 0 kB' 'Active: 6676196 kB' 'Inactive: 3493732 kB' 'Active(anon): 6284496 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489188 kB' 'Mapped: 163652 kB' 'Shmem: 5798808 kB' 'KReclaimable: 217032 kB' 'Slab: 755200 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538168 kB' 'KernelStack: 20528 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7753196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB'
[... identical IFS=': ' / read -r var val _ / compare / continue trace repeated for each /proc/meminfo key (MemTotal through Unaccepted) until the requested key matches ...]
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
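The long run of read/compare/continue entries traced above is `common.sh`'s `get_meminfo` scanning meminfo output one key at a time. A minimal standalone sketch of that pattern (the helper name is invented for this sketch, and it reads a sample file rather than the live `/proc/meminfo` so it runs anywhere; the real script also handles per-node files and strips `Node N` prefixes):

```shell
#!/usr/bin/env bash
# Sketch of the key-scan loop traced above: split each meminfo line on
# ": " into (key, value), skip keys until the requested one matches,
# then echo the value. Helper name is hypothetical.
get_meminfo_value() {
	local get=$1 mem_f=$2 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # traced as the long run of 'continue's
		echo "$val"
		return 0
	done < "$mem_f"
	return 1
}

# Usage against a small fabricated sample (not the live /proc/meminfo):
sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381156 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Setting `IFS=': '` makes `read` treat both the colon and spaces as field separators, so `HugePages_Total: 1024` splits cleanly into `var=HugePages_Total` and `val=1024`, with any trailing `kB` falling into the throwaway `_` field.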
00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92345940 kB' 'MemUsed: 5269688 kB' 'SwapCached: 0 kB' 'Active: 1508340 kB' 'Inactive: 235916 kB' 'Active(anon): 1325540 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1503944 kB' 'Mapped: 53232 kB' 'AnonPages: 243684 kB' 'Shmem: 1085228 kB' 'KernelStack: 12056 kB' 'PageTables: 5056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 331116 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.565 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.566 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.827 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.828 node0=1024 expecting 1024 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.828 14:09:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.370 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.370 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.370 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.370 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.370 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176125032 kB' 'MemAvailable: 178974764 kB' 'Buffers: 3896 kB' 'Cached: 9680428 kB' 'SwapCached: 0 kB' 'Active: 6676160 kB' 'Inactive: 3493732 kB' 'Active(anon): 6284460 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488356 kB' 'Mapped: 163904 kB' 'Shmem: 5798892 kB' 'KReclaimable: 217032 kB' 'Slab: 755384 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538352 kB' 'KernelStack: 20320 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.371 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.371 14:09:11 [... identical per-key 'continue' trace elided for the remaining /proc/meminfo keys (Mlocked through HardwareCorrupted) ...] setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.371 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.372
14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176124868 kB' 'MemAvailable: 178974600 kB' 'Buffers: 3896 kB' 'Cached: 9680428 kB' 'SwapCached: 0 kB' 'Active: 6675516 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283816 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488172 kB' 'Mapped: 
163632 kB' 'Shmem: 5798892 kB' 'KReclaimable: 217032 kB' 'Slab: 755372 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538340 kB' 'KernelStack: 20352 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.372 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.372 14:09:11 [... identical per-key 'continue' trace elided for the remaining /proc/meminfo keys (MemFree through HugePages_Rsvd) ...] setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176124868 kB' 'MemAvailable: 178974600 kB' 'Buffers: 3896 kB' 'Cached: 9680428 kB' 'SwapCached: 0 kB' 'Active: 6675568 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283868 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488212 kB' 'Mapped: 163632 kB' 'Shmem: 5798892 kB' 'KReclaimable: 217032 kB' 'Slab: 755372 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538340 kB' 'KernelStack: 20368 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.374 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.375 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 
14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.376 nr_hugepages=1024 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.376 resv_hugepages=0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.376 surplus_hugepages=0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.376 anon_hugepages=0 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.376 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176125556 kB' 'MemAvailable: 178975288 kB' 'Buffers: 3896 kB' 'Cached: 9680488 kB' 'SwapCached: 0 kB' 'Active: 6675220 kB' 'Inactive: 3493732 kB' 'Active(anon): 6283520 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487784 kB' 'Mapped: 163632 kB' 'Shmem: 5798952 kB' 'KReclaimable: 217032 kB' 'Slab: 755372 kB' 'SReclaimable: 217032 kB' 'SUnreclaim: 538340 kB' 'KernelStack: 20336 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7750388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2681812 kB' 'DirectMap2M: 14823424 kB' 'DirectMap1G: 184549376 kB' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.376 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:19.376-00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-field scan of /proc/meminfo (IFS=': '; read -r var val _), each non-matching field skipped via continue: Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:19.377 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.378 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92342592 kB' 'MemUsed: 5273036 kB' 'SwapCached: 0 kB' 'Active: 1506588 kB' 'Inactive: 235916 kB' 'Active(anon): 1323788 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 235916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1504008 kB' 'Mapped: 53208 kB' 'AnonPages: 241640 kB' 'Shmem: 1085292 kB' 'KernelStack: 11880 kB' 'PageTables: 4896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83128 kB' 'Slab: 330748 kB' 'SReclaimable: 83128 kB' 'SUnreclaim: 247620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:19.378-00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # per-field scan of node0 meminfo (IFS=': '; read -r var val _), each non-matching field skipped via continue: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free
00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc --
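The scan traced above is the `get_meminfo` helper from `setup/common.sh`: split each "key: value" line of a meminfo file on `': '` and return the value whose key matches. A minimal standalone sketch, simplified from the trace (the real helper also uses `mapfile` and strips the leading `Node <n> ` prefix from per-node meminfo files, which this sketch omits):

```shell
# Echo the value for the requested meminfo key, skipping every other
# field via `continue`-style matching, as seen in the xtrace output.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # key not found
}
```

With the node0 data shown in the log, `get_meminfo HugePages_Total` would print `1024`.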
setup/common.sh@33 -- # echo 0 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.638 node0=1024 expecting 1024 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.638 00:04:19.638 real 0m5.492s 00:04:19.638 user 0m2.135s 00:04:19.638 sys 0m3.336s 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.638 14:09:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.639 ************************************ 00:04:19.639 END TEST no_shrink_alloc 00:04:19.639 ************************************ 00:04:19.639 14:09:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.639 14:09:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.639 00:04:19.639 real 0m21.610s 00:04:19.639 user 0m8.449s 00:04:19.639 sys 0m12.658s 00:04:19.639 14:09:11 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.639 14:09:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.639 ************************************ 00:04:19.639 END TEST hugepages 00:04:19.639 ************************************ 00:04:19.639 14:09:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:19.639 14:09:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:19.639 14:09:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.639 14:09:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.639 14:09:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.639 ************************************ 00:04:19.639 START TEST driver 00:04:19.639 ************************************ 00:04:19.639 14:09:11 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:19.639 * Looking for test storage... 00:04:19.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.639 14:09:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:19.639 14:09:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.639 14:09:11 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.830 14:09:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:23.830 14:09:15 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.830 14:09:15 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.830 14:09:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.830 ************************************ 00:04:23.830 START TEST guess_driver 00:04:23.830 ************************************ 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:23.831 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:23.831 Looking for driver=vfio-pci 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.831 14:09:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.360 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.926 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.926 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.926 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.186 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:27.186 14:09:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:27.186 14:09:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
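The guess_driver trace above reduces to one decision: count the entries under /sys/kernel/iommu_groups and check the vfio unsafe-noiommu parameter; with 174 groups present (`(( 174 > 0 ))` in the trace), vfio-pci is selected. A minimal sketch of that decision, with the function name and arguments invented here so it runs without real sysfs state (the actual logic lives in setup/driver.sh):

```shell
# Sketch of the driver pick traced above. Arguments stand in for:
#   $1: number of IOMMU groups, normally $(ls /sys/kernel/iommu_groups | wc -l)
#   $2: "Y"/"N", normally read from
#       /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
pick_driver() {
    local iommu_groups=$1 unsafe_vfio=$2
    if [ "$iommu_groups" -gt 0 ] || [ "$unsafe_vfio" = "Y" ]; then
        echo vfio-pci          # IOMMU active, or noiommu mode explicitly allowed
    else
        echo uio_pci_generic   # fallback when no IOMMU groups exist
    fi
}

pick_driver 174 N   # prints vfio-pci, as in the trace above
```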
00:04:27.186 14:09:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.376 00:04:31.376 real 0m7.457s 00:04:31.376 user 0m2.116s 00:04:31.376 sys 0m3.812s 00:04:31.376 14:09:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.376 14:09:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.376 ************************************ 00:04:31.376 END TEST guess_driver 00:04:31.376 ************************************ 00:04:31.376 14:09:22 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:31.376 00:04:31.376 real 0m11.260s 00:04:31.376 user 0m3.124s 00:04:31.376 sys 0m5.797s 00:04:31.377 14:09:22 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.377 14:09:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.377 ************************************ 00:04:31.377 END TEST driver 00:04:31.377 ************************************ 00:04:31.377 14:09:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:31.377 14:09:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:31.377 14:09:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.377 14:09:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.377 14:09:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.377 ************************************ 00:04:31.377 START TEST devices 00:04:31.377 ************************************ 00:04:31.377 14:09:22 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:31.377 * Looking for test storage... 
00:04:31.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.377 14:09:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:31.377 14:09:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:31.377 14:09:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.377 14:09:22 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
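Before picking a test disk, the devices suite filters out zoned block devices; the trace shows it reading each device's queue/zoned sysfs attribute and comparing it against "none" (`[[ none != none ]]` for nvme0n1). A sketch of that filter, taking the attribute value as an argument so it runs without sysfs; the helper name mirrors, but is not, the real autotest_common.sh function:

```shell
# Sketch of the zoned-device filter traced above: a block device is
# treated as zoned unless /sys/block/<dev>/queue/zoned reads "none".
# The argument stands in for that sysfs value (an assumption so the
# sketch runs anywhere).
is_block_zoned() {
    local zoned_attr=$1
    [ "$zoned_attr" != "none" ]   # true for host-aware / host-managed devices
}

is_block_zoned none || echo "not zoned; device is usable for the tests"
```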
00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.912 14:09:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.912 No valid GPT data, bailing 00:04:33.912 14:09:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.912 14:09:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.912 14:09:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.912 14:09:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.912 14:09:25 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.912 14:09:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.912 ************************************ 00:04:33.912 START TEST nvme_mount 00:04:33.912 ************************************ 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.912 14:09:25 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.912 14:09:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.866 Creating new GPT entries in memory. 00:04:34.866 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.866 other utilities. 00:04:34.866 14:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.866 14:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.866 14:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.866 14:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.866 14:09:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.803 Creating new GPT entries in memory. 00:04:35.803 The operation has completed successfully. 
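The sgdisk invocation above (`--new=1:2048:2099199`) follows from the sector arithmetic traced in setup/common.sh: the 1 GiB request (`size=1073741824`) is divided by 512 to get sectors, the first partition starts at sector 2048, and any later partition starts right after the previous end. A sketch of that arithmetic, with the helper name invented here:

```shell
# Sketch of the partition sizing traced above (size /= 512, then
# part_start/part_end bookkeeping). Returns "start:end" in sectors.
part_range() {
    local size_bytes=$1 prev_end=$2
    local sectors=$(( size_bytes / 512 ))
    local start=$(( prev_end == 0 ? 2048 : prev_end + 1 ))
    local end=$(( start + sectors - 1 ))
    echo "$start:$end"
}

part_range 1073741824 0   # prints 2048:2099199, matching the sgdisk call
```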
00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2347753 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.803 14:09:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.338 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.339 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.339 
14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.598 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.598 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.598 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.598 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.598 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.598 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.857 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.857 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.857 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.857 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.857 14:09:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.395 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.396 14:09:33 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:41.655 14:09:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.655 14:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.234 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.494 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.494 00:04:44.494 real 0m10.779s 00:04:44.494 user 0m3.192s 00:04:44.494 sys 0m5.365s 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.494 14:09:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:44.494 ************************************ 00:04:44.494 END TEST nvme_mount 00:04:44.494 ************************************ 00:04:44.494 14:09:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:44.494 14:09:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:04:44.494 14:09:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.494 14:09:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.494 14:09:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.494 ************************************ 00:04:44.494 START TEST dm_mount 00:04:44.494 ************************************ 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.494 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.495 14:09:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:45.430 Creating new GPT entries in memory. 00:04:45.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.430 other utilities. 00:04:45.430 14:09:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.430 14:09:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.430 14:09:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.430 14:09:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.430 14:09:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:46.806 Creating new GPT entries in memory. 00:04:46.806 The operation has completed successfully. 00:04:46.806 14:09:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.806 14:09:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.806 14:09:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:46.806 14:09:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.806 14:09:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:47.743 The operation has completed successfully. 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2351937 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.743 14:09:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:50.273 14:09:41 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.273 14:09:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.180 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.440 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.440 00:04:52.440 real 0m7.981s 00:04:52.440 user 0m1.622s 00:04:52.440 sys 0m3.164s 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.440 14:09:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.440 ************************************ 00:04:52.440 END TEST dm_mount 00:04:52.440 ************************************ 00:04:52.440 14:09:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.440 14:09:44 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.699 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.699 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:04:52.699 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.699 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.699 14:09:44 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.699 00:04:52.699 real 0m21.840s 00:04:52.699 user 0m5.803s 00:04:52.699 sys 0m10.290s 00:04:52.699 14:09:44 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.699 14:09:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.699 ************************************ 00:04:52.699 END TEST devices 00:04:52.699 ************************************ 00:04:52.699 14:09:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.699 00:04:52.699 real 1m13.805s 00:04:52.699 user 0m23.593s 00:04:52.699 sys 0m40.066s 00:04:52.699 14:09:44 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.699 14:09:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.699 ************************************ 00:04:52.699 END TEST setup.sh 00:04:52.699 ************************************ 00:04:52.959 14:09:44 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.959 14:09:44 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:55.492 Hugepages 00:04:55.492 node hugesize free / total 
00:04:55.492 node0 1048576kB 0 / 0 00:04:55.492 node0 2048kB 2048 / 2048 00:04:55.492 node1 1048576kB 0 / 0 00:04:55.492 node1 2048kB 0 / 0 00:04:55.492 00:04:55.492 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.492 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:55.492 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:55.492 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:55.492 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:55.492 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:55.751 14:09:47 -- spdk/autotest.sh@130 -- # uname -s 00:04:55.751 14:09:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:55.751 14:09:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:55.751 14:09:47 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.283 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.283 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.852 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.112 14:09:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:00.050 14:09:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:00.050 14:09:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:00.050 14:09:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.050 14:09:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:00.050 14:09:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:00.050 14:09:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:00.050 14:09:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.050 14:09:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:00.050 14:09:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.050 14:09:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:00.050 14:09:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:00.050 14:09:51 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.587 Waiting for block devices as requested 00:05:02.587 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:02.587 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.587 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.587 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.846 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:05:02.846 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.846 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.846 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.106 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.106 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:03.106 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:03.106 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.366 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.366 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:03.366 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.626 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.626 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.626 14:09:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.626 14:09:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:03.626 14:09:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:03.626 14:09:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.626 14:09:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.626 14:09:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.626 14:09:55 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:03.626 14:09:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.626 14:09:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.626 14:09:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.626 14:09:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.626 14:09:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.626 14:09:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.626 14:09:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.626 14:09:55 -- common/autotest_common.sh@1557 -- # continue 00:05:03.626 14:09:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.626 14:09:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.626 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.884 14:09:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.884 14:09:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.884 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.884 14:09:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.461 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:06.461 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.030 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.030 14:09:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.030 14:09:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.030 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 14:09:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.289 14:09:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.289 14:09:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.289 14:09:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.289 14:09:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.289 14:09:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.289 14:09:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.289 14:09:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.289 14:09:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.289 14:09:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.289 14:09:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.289 14:09:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.289 14:09:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:07.289 14:09:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.289 14:09:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:07.289 14:09:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:07.289 14:09:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:07.289 14:09:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:07.289 14:09:59 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:07.289 14:09:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:07.289 14:09:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2360581 00:05:07.289 14:09:59 -- common/autotest_common.sh@1598 -- # waitforlisten 2360581 00:05:07.289 14:09:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.289 14:09:59 -- common/autotest_common.sh@829 -- # '[' -z 2360581 ']' 00:05:07.289 14:09:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.289 14:09:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.289 14:09:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.289 14:09:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.289 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 [2024-07-12 14:09:59.190720] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:07.289 [2024-07-12 14:09:59.190762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360581 ] 00:05:07.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.289 [2024-07-12 14:09:59.243774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.549 [2024-07-12 14:09:59.316963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.116 14:09:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.116 14:09:59 -- common/autotest_common.sh@862 -- # return 0 00:05:08.116 14:09:59 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:08.116 14:09:59 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:08.116 14:09:59 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:11.405 nvme0n1 00:05:11.405 14:10:02 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:11.405 [2024-07-12 14:10:03.135855] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:11.405 request: 00:05:11.405 { 00:05:11.405 "nvme_ctrlr_name": "nvme0", 00:05:11.405 "password": "test", 00:05:11.405 "method": "bdev_nvme_opal_revert", 00:05:11.405 "req_id": 1 00:05:11.405 } 00:05:11.405 Got JSON-RPC error response 00:05:11.405 response: 00:05:11.405 { 00:05:11.405 "code": -32602, 00:05:11.405 "message": "Invalid parameters" 00:05:11.405 } 00:05:11.405 14:10:03 -- common/autotest_common.sh@1604 -- # true 00:05:11.405 14:10:03 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:11.405 14:10:03 -- common/autotest_common.sh@1608 -- # killprocess 2360581 00:05:11.405 14:10:03 -- 
common/autotest_common.sh@948 -- # '[' -z 2360581 ']' 00:05:11.405 14:10:03 -- common/autotest_common.sh@952 -- # kill -0 2360581 00:05:11.405 14:10:03 -- common/autotest_common.sh@953 -- # uname 00:05:11.405 14:10:03 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.405 14:10:03 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360581 00:05:11.405 14:10:03 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.405 14:10:03 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.405 14:10:03 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360581' 00:05:11.405 killing process with pid 2360581 00:05:11.405 14:10:03 -- common/autotest_common.sh@967 -- # kill 2360581 00:05:11.405 14:10:03 -- common/autotest_common.sh@972 -- # wait 2360581 00:05:12.781 14:10:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:12.781 14:10:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:12.781 14:10:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.781 14:10:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.781 14:10:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:12.781 14:10:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.781 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:12.781 14:10:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:12.781 14:10:04 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:12.781 14:10:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.781 14:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.781 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.040 ************************************ 00:05:13.040 START TEST env 00:05:13.040 ************************************ 00:05:13.040 14:10:04 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:13.040 * Looking 
for test storage... 00:05:13.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:13.040 14:10:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:13.040 14:10:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.040 14:10:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.040 14:10:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.040 ************************************ 00:05:13.040 START TEST env_memory 00:05:13.040 ************************************ 00:05:13.040 14:10:04 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:13.040 00:05:13.040 00:05:13.040 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.040 http://cunit.sourceforge.net/ 00:05:13.040 00:05:13.040 00:05:13.040 Suite: memory 00:05:13.040 Test: alloc and free memory map ...[2024-07-12 14:10:04.975764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.040 passed 00:05:13.041 Test: mem map translation ...[2024-07-12 14:10:04.995163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.041 [2024-07-12 14:10:04.995177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.041 [2024-07-12 14:10:04.995213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.041 [2024-07-12 14:10:04.995236] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.041 passed 00:05:13.041 Test: mem map registration ...[2024-07-12 14:10:05.034499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.041 [2024-07-12 14:10:05.034515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.041 passed 00:05:13.301 Test: mem map adjacent registrations ...passed 00:05:13.301 00:05:13.301 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.301 suites 1 1 n/a 0 0 00:05:13.301 tests 4 4 4 0 0 00:05:13.301 asserts 152 152 152 0 n/a 00:05:13.301 00:05:13.301 Elapsed time = 0.142 seconds 00:05:13.301 00:05:13.301 real 0m0.154s 00:05:13.301 user 0m0.144s 00:05:13.301 sys 0m0.009s 00:05:13.301 14:10:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.301 14:10:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.301 ************************************ 00:05:13.301 END TEST env_memory 00:05:13.301 ************************************ 00:05:13.301 14:10:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:13.301 14:10:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.301 14:10:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.301 14:10:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.301 14:10:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.301 ************************************ 00:05:13.301 START TEST env_vtophys 00:05:13.301 ************************************ 00:05:13.301 14:10:05 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.301 EAL: lib.eal log level changed from notice to debug 00:05:13.301 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.301 EAL: Detected lcore 1 as core 1 on socket 0 00:05:13.301 EAL: Detected lcore 2 as core 2 on socket 0 00:05:13.301 EAL: Detected lcore 3 as core 3 on socket 0 00:05:13.301 EAL: Detected lcore 4 as core 4 on socket 0 00:05:13.301 EAL: Detected lcore 5 as core 5 on socket 0 00:05:13.301 EAL: Detected lcore 6 as core 6 on socket 0 00:05:13.301 EAL: Detected lcore 7 as core 8 on socket 0 00:05:13.301 EAL: Detected lcore 8 as core 9 on socket 0 00:05:13.301 EAL: Detected lcore 9 as core 10 on socket 0 00:05:13.301 EAL: Detected lcore 10 as core 11 on socket 0 00:05:13.301 EAL: Detected lcore 11 as core 12 on socket 0 00:05:13.301 EAL: Detected lcore 12 as core 13 on socket 0 00:05:13.301 EAL: Detected lcore 13 as core 16 on socket 0 00:05:13.301 EAL: Detected lcore 14 as core 17 on socket 0 00:05:13.301 EAL: Detected lcore 15 as core 18 on socket 0 00:05:13.301 EAL: Detected lcore 16 as core 19 on socket 0 00:05:13.301 EAL: Detected lcore 17 as core 20 on socket 0 00:05:13.301 EAL: Detected lcore 18 as core 21 on socket 0 00:05:13.301 EAL: Detected lcore 19 as core 25 on socket 0 00:05:13.301 EAL: Detected lcore 20 as core 26 on socket 0 00:05:13.301 EAL: Detected lcore 21 as core 27 on socket 0 00:05:13.301 EAL: Detected lcore 22 as core 28 on socket 0 00:05:13.301 EAL: Detected lcore 23 as core 29 on socket 0 00:05:13.301 EAL: Detected lcore 24 as core 0 on socket 1 00:05:13.301 EAL: Detected lcore 25 as core 1 on socket 1 00:05:13.301 EAL: Detected lcore 26 as core 2 on socket 1 00:05:13.301 EAL: Detected lcore 27 as core 3 on socket 1 00:05:13.301 EAL: Detected lcore 28 as core 4 on socket 1 00:05:13.301 EAL: Detected lcore 29 as core 5 on socket 1 00:05:13.301 EAL: Detected lcore 30 as core 6 on socket 1 00:05:13.301 EAL: Detected lcore 31 as core 9 on socket 
1 00:05:13.301 EAL: Detected lcore 32 as core 10 on socket 1 00:05:13.301 EAL: Detected lcore 33 as core 11 on socket 1 00:05:13.301 EAL: Detected lcore 34 as core 12 on socket 1 00:05:13.301 EAL: Detected lcore 35 as core 13 on socket 1 00:05:13.301 EAL: Detected lcore 36 as core 16 on socket 1 00:05:13.301 EAL: Detected lcore 37 as core 17 on socket 1 00:05:13.301 EAL: Detected lcore 38 as core 18 on socket 1 00:05:13.301 EAL: Detected lcore 39 as core 19 on socket 1 00:05:13.301 EAL: Detected lcore 40 as core 20 on socket 1 00:05:13.301 EAL: Detected lcore 41 as core 21 on socket 1 00:05:13.301 EAL: Detected lcore 42 as core 24 on socket 1 00:05:13.301 EAL: Detected lcore 43 as core 25 on socket 1 00:05:13.301 EAL: Detected lcore 44 as core 26 on socket 1 00:05:13.301 EAL: Detected lcore 45 as core 27 on socket 1 00:05:13.301 EAL: Detected lcore 46 as core 28 on socket 1 00:05:13.301 EAL: Detected lcore 47 as core 29 on socket 1 00:05:13.301 EAL: Detected lcore 48 as core 0 on socket 0 00:05:13.301 EAL: Detected lcore 49 as core 1 on socket 0 00:05:13.301 EAL: Detected lcore 50 as core 2 on socket 0 00:05:13.301 EAL: Detected lcore 51 as core 3 on socket 0 00:05:13.301 EAL: Detected lcore 52 as core 4 on socket 0 00:05:13.301 EAL: Detected lcore 53 as core 5 on socket 0 00:05:13.301 EAL: Detected lcore 54 as core 6 on socket 0 00:05:13.301 EAL: Detected lcore 55 as core 8 on socket 0 00:05:13.301 EAL: Detected lcore 56 as core 9 on socket 0 00:05:13.301 EAL: Detected lcore 57 as core 10 on socket 0 00:05:13.301 EAL: Detected lcore 58 as core 11 on socket 0 00:05:13.301 EAL: Detected lcore 59 as core 12 on socket 0 00:05:13.301 EAL: Detected lcore 60 as core 13 on socket 0 00:05:13.301 EAL: Detected lcore 61 as core 16 on socket 0 00:05:13.301 EAL: Detected lcore 62 as core 17 on socket 0 00:05:13.301 EAL: Detected lcore 63 as core 18 on socket 0 00:05:13.301 EAL: Detected lcore 64 as core 19 on socket 0 00:05:13.301 EAL: Detected lcore 65 as core 20 on socket 0 
00:05:13.301 EAL: Detected lcore 66 as core 21 on socket 0 00:05:13.301 EAL: Detected lcore 67 as core 25 on socket 0 00:05:13.301 EAL: Detected lcore 68 as core 26 on socket 0 00:05:13.301 EAL: Detected lcore 69 as core 27 on socket 0 00:05:13.301 EAL: Detected lcore 70 as core 28 on socket 0 00:05:13.301 EAL: Detected lcore 71 as core 29 on socket 0 00:05:13.301 EAL: Detected lcore 72 as core 0 on socket 1 00:05:13.301 EAL: Detected lcore 73 as core 1 on socket 1 00:05:13.301 EAL: Detected lcore 74 as core 2 on socket 1 00:05:13.301 EAL: Detected lcore 75 as core 3 on socket 1 00:05:13.301 EAL: Detected lcore 76 as core 4 on socket 1 00:05:13.301 EAL: Detected lcore 77 as core 5 on socket 1 00:05:13.301 EAL: Detected lcore 78 as core 6 on socket 1 00:05:13.301 EAL: Detected lcore 79 as core 9 on socket 1 00:05:13.301 EAL: Detected lcore 80 as core 10 on socket 1 00:05:13.301 EAL: Detected lcore 81 as core 11 on socket 1 00:05:13.301 EAL: Detected lcore 82 as core 12 on socket 1 00:05:13.301 EAL: Detected lcore 83 as core 13 on socket 1 00:05:13.301 EAL: Detected lcore 84 as core 16 on socket 1 00:05:13.301 EAL: Detected lcore 85 as core 17 on socket 1 00:05:13.301 EAL: Detected lcore 86 as core 18 on socket 1 00:05:13.301 EAL: Detected lcore 87 as core 19 on socket 1 00:05:13.301 EAL: Detected lcore 88 as core 20 on socket 1 00:05:13.301 EAL: Detected lcore 89 as core 21 on socket 1 00:05:13.301 EAL: Detected lcore 90 as core 24 on socket 1 00:05:13.301 EAL: Detected lcore 91 as core 25 on socket 1 00:05:13.301 EAL: Detected lcore 92 as core 26 on socket 1 00:05:13.301 EAL: Detected lcore 93 as core 27 on socket 1 00:05:13.301 EAL: Detected lcore 94 as core 28 on socket 1 00:05:13.301 EAL: Detected lcore 95 as core 29 on socket 1 00:05:13.301 EAL: Maximum logical cores by configuration: 128 00:05:13.301 EAL: Detected CPU lcores: 96 00:05:13.301 EAL: Detected NUMA nodes: 2 00:05:13.301 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.301 EAL: Detected 
shared linkage of DPDK 00:05:13.301 EAL: No shared files mode enabled, IPC will be disabled 00:05:13.301 EAL: Bus pci wants IOVA as 'DC' 00:05:13.301 EAL: Buses did not request a specific IOVA mode. 00:05:13.301 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:13.301 EAL: Selected IOVA mode 'VA' 00:05:13.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.301 EAL: Probing VFIO support... 00:05:13.301 EAL: IOMMU type 1 (Type 1) is supported 00:05:13.301 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:13.301 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:13.301 EAL: VFIO support initialized 00:05:13.301 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.301 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.301 EAL: Setting up physically contiguous memory... 00:05:13.301 EAL: Setting maximum number of open files to 524288 00:05:13.301 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.301 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:13.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.301 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.301 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.301 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.301 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.301 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.301 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.301 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.301 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.301 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.301 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.301 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.301 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:13.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.301 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:13.301 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.301 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:13.301 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:13.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.302 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:13.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.302 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:13.302 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:13.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.302 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:13.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.302 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:05:13.302 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:13.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.302 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:13.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.302 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:13.302 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:13.302 EAL: Hugepages will be freed exactly as allocated. 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: TSC frequency is ~2300000 KHz 00:05:13.302 EAL: Main lcore 0 is ready (tid=7ff55ecc3a00;cpuset=[0]) 00:05:13.302 EAL: Trying to obtain current memory policy. 00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 0 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.302 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.302 00:05:13.302 00:05:13.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.302 http://cunit.sourceforge.net/ 00:05:13.302 00:05:13.302 00:05:13.302 Suite: components_suite 00:05:13.302 Test: vtophys_malloc_test ...passed 00:05:13.302 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.302 EAL: Trying to obtain current memory policy. 00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.302 EAL: Trying to obtain current memory policy. 00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.302 EAL: Trying to obtain current memory policy. 
00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.302 EAL: Trying to obtain current memory policy. 00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.302 EAL: Trying to obtain current memory policy. 00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.302 EAL: Trying to obtain current memory policy. 
00:05:13.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.302 EAL: Restoring previous memory policy: 4 00:05:13.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.302 EAL: request: mp_malloc_sync 00:05:13.302 EAL: No shared files mode enabled, IPC is disabled 00:05:13.302 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.561 EAL: request: mp_malloc_sync 00:05:13.561 EAL: No shared files mode enabled, IPC is disabled 00:05:13.561 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.561 EAL: Trying to obtain current memory policy. 00:05:13.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.561 EAL: Restoring previous memory policy: 4 00:05:13.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.561 EAL: request: mp_malloc_sync 00:05:13.561 EAL: No shared files mode enabled, IPC is disabled 00:05:13.561 EAL: Heap on socket 0 was expanded by 258MB 00:05:13.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.561 EAL: request: mp_malloc_sync 00:05:13.561 EAL: No shared files mode enabled, IPC is disabled 00:05:13.561 EAL: Heap on socket 0 was shrunk by 258MB 00:05:13.561 EAL: Trying to obtain current memory policy. 00:05:13.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.561 EAL: Restoring previous memory policy: 4 00:05:13.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.561 EAL: request: mp_malloc_sync 00:05:13.561 EAL: No shared files mode enabled, IPC is disabled 00:05:13.561 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.820 EAL: request: mp_malloc_sync 00:05:13.820 EAL: No shared files mode enabled, IPC is disabled 00:05:13.820 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.820 EAL: Trying to obtain current memory policy. 
00:05:13.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.079 EAL: Restoring previous memory policy: 4 00:05:14.079 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.079 EAL: request: mp_malloc_sync 00:05:14.079 EAL: No shared files mode enabled, IPC is disabled 00:05:14.079 EAL: Heap on socket 0 was expanded by 1026MB 00:05:14.079 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.338 EAL: request: mp_malloc_sync 00:05:14.338 EAL: No shared files mode enabled, IPC is disabled 00:05:14.338 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.339 passed 00:05:14.339 00:05:14.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.339 suites 1 1 n/a 0 0 00:05:14.339 tests 2 2 2 0 0 00:05:14.339 asserts 497 497 497 0 n/a 00:05:14.339 00:05:14.339 Elapsed time = 0.961 seconds 00:05:14.339 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.339 EAL: request: mp_malloc_sync 00:05:14.339 EAL: No shared files mode enabled, IPC is disabled 00:05:14.339 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.339 EAL: No shared files mode enabled, IPC is disabled 00:05:14.339 EAL: No shared files mode enabled, IPC is disabled 00:05:14.339 EAL: No shared files mode enabled, IPC is disabled 00:05:14.339 00:05:14.339 real 0m1.072s 00:05:14.339 user 0m0.630s 00:05:14.339 sys 0m0.412s 00:05:14.339 14:10:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.339 14:10:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.339 ************************************ 00:05:14.339 END TEST env_vtophys 00:05:14.339 ************************************ 00:05:14.339 14:10:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:14.339 14:10:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.339 14:10:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.339 14:10:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:05:14.339 14:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.339 ************************************ 00:05:14.339 START TEST env_pci 00:05:14.339 ************************************ 00:05:14.339 14:10:06 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.339 00:05:14.339 00:05:14.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.339 http://cunit.sourceforge.net/ 00:05:14.339 00:05:14.339 00:05:14.339 Suite: pci 00:05:14.339 Test: pci_hook ...[2024-07-12 14:10:06.308895] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2361994 has claimed it 00:05:14.339 EAL: Cannot find device (10000:00:01.0) 00:05:14.339 EAL: Failed to attach device on primary process 00:05:14.339 passed 00:05:14.339 00:05:14.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.339 suites 1 1 n/a 0 0 00:05:14.339 tests 1 1 1 0 0 00:05:14.339 asserts 25 25 25 0 n/a 00:05:14.339 00:05:14.339 Elapsed time = 0.026 seconds 00:05:14.339 00:05:14.339 real 0m0.046s 00:05:14.339 user 0m0.016s 00:05:14.339 sys 0m0.030s 00:05:14.339 14:10:06 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.339 14:10:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.339 ************************************ 00:05:14.339 END TEST env_pci 00:05:14.339 ************************************ 00:05:14.598 14:10:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:14.598 14:10:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.598 14:10:06 env -- env/env.sh@15 -- # uname 00:05:14.598 14:10:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.598 14:10:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.598 14:10:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.598 14:10:06 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:14.598 14:10:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.598 14:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.598 ************************************ 00:05:14.598 START TEST env_dpdk_post_init 00:05:14.598 ************************************ 00:05:14.598 14:10:06 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.598 EAL: Detected CPU lcores: 96 00:05:14.598 EAL: Detected NUMA nodes: 2 00:05:14.598 EAL: Detected shared linkage of DPDK 00:05:14.598 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.598 EAL: Selected IOVA mode 'VA' 00:05:14.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.598 EAL: VFIO support initialized 00:05:14.598 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.599 EAL: Using IOMMU type 1 (Type 1) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.5 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:14.599 EAL: Ignore mapping IO port bar(1) 00:05:14.599 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:15.536 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:15.536 EAL: Ignore mapping IO port bar(1) 00:05:15.536 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:18.825 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:18.825 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:18.825 Starting DPDK initialization... 00:05:18.825 Starting SPDK post initialization... 00:05:18.825 SPDK NVMe probe 00:05:18.825 Attaching to 0000:5e:00.0 00:05:18.825 Attached to 0000:5e:00.0 00:05:18.825 Cleaning up... 
00:05:18.825 00:05:18.825 real 0m4.337s 00:05:18.825 user 0m3.309s 00:05:18.825 sys 0m0.107s 00:05:18.825 14:10:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.825 14:10:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.825 ************************************ 00:05:18.825 END TEST env_dpdk_post_init 00:05:18.825 ************************************ 00:05:18.825 14:10:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:18.825 14:10:10 env -- env/env.sh@26 -- # uname 00:05:18.825 14:10:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.825 14:10:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.825 14:10:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.826 14:10:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.826 14:10:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.826 ************************************ 00:05:18.826 START TEST env_mem_callbacks 00:05:18.826 ************************************ 00:05:18.826 14:10:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.086 EAL: Detected CPU lcores: 96 00:05:19.086 EAL: Detected NUMA nodes: 2 00:05:19.086 EAL: Detected shared linkage of DPDK 00:05:19.086 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.086 EAL: Selected IOVA mode 'VA' 00:05:19.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.086 EAL: VFIO support initialized 00:05:19.086 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.086 00:05:19.086 00:05:19.086 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.086 http://cunit.sourceforge.net/ 00:05:19.086 00:05:19.086 00:05:19.086 Suite: memory 00:05:19.086 Test: test ... 
00:05:19.086 register 0x200000200000 2097152 00:05:19.086 malloc 3145728 00:05:19.086 register 0x200000400000 4194304 00:05:19.086 buf 0x200000500000 len 3145728 PASSED 00:05:19.086 malloc 64 00:05:19.086 buf 0x2000004fff40 len 64 PASSED 00:05:19.086 malloc 4194304 00:05:19.086 register 0x200000800000 6291456 00:05:19.086 buf 0x200000a00000 len 4194304 PASSED 00:05:19.086 free 0x200000500000 3145728 00:05:19.086 free 0x2000004fff40 64 00:05:19.086 unregister 0x200000400000 4194304 PASSED 00:05:19.086 free 0x200000a00000 4194304 00:05:19.086 unregister 0x200000800000 6291456 PASSED 00:05:19.086 malloc 8388608 00:05:19.086 register 0x200000400000 10485760 00:05:19.086 buf 0x200000600000 len 8388608 PASSED 00:05:19.086 free 0x200000600000 8388608 00:05:19.086 unregister 0x200000400000 10485760 PASSED 00:05:19.086 passed 00:05:19.086 00:05:19.086 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.086 suites 1 1 n/a 0 0 00:05:19.086 tests 1 1 1 0 0 00:05:19.086 asserts 15 15 15 0 n/a 00:05:19.086 00:05:19.086 Elapsed time = 0.005 seconds 00:05:19.086 00:05:19.086 real 0m0.056s 00:05:19.086 user 0m0.022s 00:05:19.086 sys 0m0.034s 00:05:19.086 14:10:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.086 14:10:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:19.086 ************************************ 00:05:19.086 END TEST env_mem_callbacks 00:05:19.086 ************************************ 00:05:19.086 14:10:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:19.086 00:05:19.086 real 0m6.086s 00:05:19.086 user 0m4.290s 00:05:19.086 sys 0m0.876s 00:05:19.086 14:10:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.086 14:10:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.086 ************************************ 00:05:19.086 END TEST env 00:05:19.086 ************************************ 00:05:19.086 14:10:10 -- common/autotest_common.sh@1142 -- # return 0 
00:05:19.086 14:10:10 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:19.086 14:10:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.086 14:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.086 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.086 ************************************ 00:05:19.086 START TEST rpc 00:05:19.086 ************************************ 00:05:19.086 14:10:10 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:19.086 * Looking for test storage... 00:05:19.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.086 14:10:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2362833 00:05:19.086 14:10:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.086 14:10:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:19.086 14:10:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2362833 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 2362833 ']' 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.086 14:10:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.345 [2024-07-12 14:10:11.110487] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:19.345 [2024-07-12 14:10:11.110532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362833 ] 00:05:19.345 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.345 [2024-07-12 14:10:11.163486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.345 [2024-07-12 14:10:11.237995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.345 [2024-07-12 14:10:11.238034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2362833' to capture a snapshot of events at runtime. 00:05:19.345 [2024-07-12 14:10:11.238041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.345 [2024-07-12 14:10:11.238046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.345 [2024-07-12 14:10:11.238051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2362833 for offline analysis/debug. 
00:05:19.346 [2024-07-12 14:10:11.238087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.913 14:10:11 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.913 14:10:11 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.913 14:10:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.913 14:10:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.913 14:10:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.913 14:10:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.913 14:10:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.913 14:10:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.913 14:10:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 ************************************ 00:05:20.172 START TEST rpc_integrity 00:05:20.172 ************************************ 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.172 14:10:11 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:20.172 14:10:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.172 14:10:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.172 { 00:05:20.172 "name": "Malloc0", 00:05:20.172 "aliases": [ 00:05:20.172 "0dc4ee3d-0479-4146-877b-e10f258f898b" 00:05:20.172 ], 00:05:20.172 "product_name": "Malloc disk", 00:05:20.172 "block_size": 512, 00:05:20.172 "num_blocks": 16384, 00:05:20.172 "uuid": "0dc4ee3d-0479-4146-877b-e10f258f898b", 00:05:20.172 "assigned_rate_limits": { 00:05:20.172 "rw_ios_per_sec": 0, 00:05:20.172 "rw_mbytes_per_sec": 0, 00:05:20.172 "r_mbytes_per_sec": 0, 00:05:20.172 "w_mbytes_per_sec": 0 00:05:20.172 }, 00:05:20.172 "claimed": false, 00:05:20.172 "zoned": false, 00:05:20.172 "supported_io_types": { 00:05:20.172 "read": true, 00:05:20.172 "write": true, 00:05:20.172 "unmap": true, 00:05:20.172 "flush": true, 00:05:20.172 "reset": true, 00:05:20.172 "nvme_admin": false, 00:05:20.172 "nvme_io": false, 00:05:20.172 "nvme_io_md": false, 00:05:20.172 "write_zeroes": true, 00:05:20.172 "zcopy": true, 00:05:20.172 "get_zone_info": false, 00:05:20.172 
"zone_management": false, 00:05:20.172 "zone_append": false, 00:05:20.172 "compare": false, 00:05:20.172 "compare_and_write": false, 00:05:20.172 "abort": true, 00:05:20.172 "seek_hole": false, 00:05:20.172 "seek_data": false, 00:05:20.172 "copy": true, 00:05:20.172 "nvme_iov_md": false 00:05:20.172 }, 00:05:20.172 "memory_domains": [ 00:05:20.172 { 00:05:20.172 "dma_device_id": "system", 00:05:20.172 "dma_device_type": 1 00:05:20.172 }, 00:05:20.172 { 00:05:20.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.172 "dma_device_type": 2 00:05:20.172 } 00:05:20.172 ], 00:05:20.172 "driver_specific": {} 00:05:20.172 } 00:05:20.172 ]' 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 [2024-07-12 14:10:12.055120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:20.172 [2024-07-12 14:10:12.055151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.172 [2024-07-12 14:10:12.055162] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9572d0 00:05:20.172 [2024-07-12 14:10:12.055168] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.172 [2024-07-12 14:10:12.056248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.172 [2024-07-12 14:10:12.056268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.172 Passthru0 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.172 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.172 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.172 { 00:05:20.172 "name": "Malloc0", 00:05:20.172 "aliases": [ 00:05:20.172 "0dc4ee3d-0479-4146-877b-e10f258f898b" 00:05:20.172 ], 00:05:20.172 "product_name": "Malloc disk", 00:05:20.172 "block_size": 512, 00:05:20.172 "num_blocks": 16384, 00:05:20.172 "uuid": "0dc4ee3d-0479-4146-877b-e10f258f898b", 00:05:20.172 "assigned_rate_limits": { 00:05:20.172 "rw_ios_per_sec": 0, 00:05:20.172 "rw_mbytes_per_sec": 0, 00:05:20.173 "r_mbytes_per_sec": 0, 00:05:20.173 "w_mbytes_per_sec": 0 00:05:20.173 }, 00:05:20.173 "claimed": true, 00:05:20.173 "claim_type": "exclusive_write", 00:05:20.173 "zoned": false, 00:05:20.173 "supported_io_types": { 00:05:20.173 "read": true, 00:05:20.173 "write": true, 00:05:20.173 "unmap": true, 00:05:20.173 "flush": true, 00:05:20.173 "reset": true, 00:05:20.173 "nvme_admin": false, 00:05:20.173 "nvme_io": false, 00:05:20.173 "nvme_io_md": false, 00:05:20.173 "write_zeroes": true, 00:05:20.173 "zcopy": true, 00:05:20.173 "get_zone_info": false, 00:05:20.173 "zone_management": false, 00:05:20.173 "zone_append": false, 00:05:20.173 "compare": false, 00:05:20.173 "compare_and_write": false, 00:05:20.173 "abort": true, 00:05:20.173 "seek_hole": false, 00:05:20.173 "seek_data": false, 00:05:20.173 "copy": true, 00:05:20.173 "nvme_iov_md": false 00:05:20.173 }, 00:05:20.173 "memory_domains": [ 00:05:20.173 { 00:05:20.173 "dma_device_id": "system", 00:05:20.173 "dma_device_type": 1 00:05:20.173 }, 00:05:20.173 { 00:05:20.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.173 "dma_device_type": 2 00:05:20.173 } 00:05:20.173 ], 00:05:20.173 "driver_specific": {} 00:05:20.173 }, 00:05:20.173 { 
00:05:20.173 "name": "Passthru0", 00:05:20.173 "aliases": [ 00:05:20.173 "154b5fd7-117a-57d2-9677-badbc6c6f356" 00:05:20.173 ], 00:05:20.173 "product_name": "passthru", 00:05:20.173 "block_size": 512, 00:05:20.173 "num_blocks": 16384, 00:05:20.173 "uuid": "154b5fd7-117a-57d2-9677-badbc6c6f356", 00:05:20.173 "assigned_rate_limits": { 00:05:20.173 "rw_ios_per_sec": 0, 00:05:20.173 "rw_mbytes_per_sec": 0, 00:05:20.173 "r_mbytes_per_sec": 0, 00:05:20.173 "w_mbytes_per_sec": 0 00:05:20.173 }, 00:05:20.173 "claimed": false, 00:05:20.173 "zoned": false, 00:05:20.173 "supported_io_types": { 00:05:20.173 "read": true, 00:05:20.173 "write": true, 00:05:20.173 "unmap": true, 00:05:20.173 "flush": true, 00:05:20.173 "reset": true, 00:05:20.173 "nvme_admin": false, 00:05:20.173 "nvme_io": false, 00:05:20.173 "nvme_io_md": false, 00:05:20.173 "write_zeroes": true, 00:05:20.173 "zcopy": true, 00:05:20.173 "get_zone_info": false, 00:05:20.173 "zone_management": false, 00:05:20.173 "zone_append": false, 00:05:20.173 "compare": false, 00:05:20.173 "compare_and_write": false, 00:05:20.173 "abort": true, 00:05:20.173 "seek_hole": false, 00:05:20.173 "seek_data": false, 00:05:20.173 "copy": true, 00:05:20.173 "nvme_iov_md": false 00:05:20.173 }, 00:05:20.173 "memory_domains": [ 00:05:20.173 { 00:05:20.173 "dma_device_id": "system", 00:05:20.173 "dma_device_type": 1 00:05:20.173 }, 00:05:20.173 { 00:05:20.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.173 "dma_device_type": 2 00:05:20.173 } 00:05:20.173 ], 00:05:20.173 "driver_specific": { 00:05:20.173 "passthru": { 00:05:20.173 "name": "Passthru0", 00:05:20.173 "base_bdev_name": "Malloc0" 00:05:20.173 } 00:05:20.173 } 00:05:20.173 } 00:05:20.173 ]' 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.173 14:10:12 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.173 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.173 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.432 14:10:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.432 00:05:20.432 real 0m0.279s 00:05:20.432 user 0m0.167s 00:05:20.432 sys 0m0.045s 00:05:20.432 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 ************************************ 00:05:20.432 END TEST rpc_integrity 00:05:20.432 ************************************ 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.432 14:10:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 
************************************ 00:05:20.432 START TEST rpc_plugins 00:05:20.432 ************************************ 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.432 { 00:05:20.432 "name": "Malloc1", 00:05:20.432 "aliases": [ 00:05:20.432 "65edc34b-b976-493a-affa-f64db4e7f15a" 00:05:20.432 ], 00:05:20.432 "product_name": "Malloc disk", 00:05:20.432 "block_size": 4096, 00:05:20.432 "num_blocks": 256, 00:05:20.432 "uuid": "65edc34b-b976-493a-affa-f64db4e7f15a", 00:05:20.432 "assigned_rate_limits": { 00:05:20.432 "rw_ios_per_sec": 0, 00:05:20.432 "rw_mbytes_per_sec": 0, 00:05:20.432 "r_mbytes_per_sec": 0, 00:05:20.432 "w_mbytes_per_sec": 0 00:05:20.432 }, 00:05:20.432 "claimed": false, 00:05:20.432 "zoned": false, 00:05:20.432 "supported_io_types": { 00:05:20.432 "read": true, 00:05:20.432 "write": true, 00:05:20.432 "unmap": true, 00:05:20.432 "flush": true, 00:05:20.432 "reset": true, 00:05:20.432 "nvme_admin": false, 00:05:20.432 "nvme_io": false, 00:05:20.432 "nvme_io_md": false, 00:05:20.432 "write_zeroes": true, 00:05:20.432 "zcopy": true, 00:05:20.432 
"get_zone_info": false, 00:05:20.432 "zone_management": false, 00:05:20.432 "zone_append": false, 00:05:20.432 "compare": false, 00:05:20.432 "compare_and_write": false, 00:05:20.432 "abort": true, 00:05:20.432 "seek_hole": false, 00:05:20.432 "seek_data": false, 00:05:20.432 "copy": true, 00:05:20.432 "nvme_iov_md": false 00:05:20.432 }, 00:05:20.432 "memory_domains": [ 00:05:20.432 { 00:05:20.432 "dma_device_id": "system", 00:05:20.432 "dma_device_type": 1 00:05:20.432 }, 00:05:20.432 { 00:05:20.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.432 "dma_device_type": 2 00:05:20.432 } 00:05:20.432 ], 00:05:20.432 "driver_specific": {} 00:05:20.432 } 00:05:20.432 ]' 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.432 14:10:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.432 00:05:20.432 real 0m0.137s 00:05:20.432 user 0m0.091s 00:05:20.432 sys 0m0.013s 00:05:20.432 14:10:12 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.432 14:10:12 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.432 ************************************ 00:05:20.432 END TEST rpc_plugins 00:05:20.432 ************************************ 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.432 14:10:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.432 14:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.691 ************************************ 00:05:20.691 START TEST rpc_trace_cmd_test 00:05:20.691 ************************************ 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.691 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2362833", 00:05:20.691 "tpoint_group_mask": "0x8", 00:05:20.691 "iscsi_conn": { 00:05:20.691 "mask": "0x2", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "scsi": { 00:05:20.691 "mask": "0x4", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "bdev": { 00:05:20.691 "mask": "0x8", 00:05:20.691 "tpoint_mask": "0xffffffffffffffff" 00:05:20.691 }, 00:05:20.691 "nvmf_rdma": { 00:05:20.691 "mask": "0x10", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "nvmf_tcp": { 00:05:20.691 "mask": "0x20", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 
00:05:20.691 "ftl": { 00:05:20.691 "mask": "0x40", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "blobfs": { 00:05:20.691 "mask": "0x80", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "dsa": { 00:05:20.691 "mask": "0x200", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "thread": { 00:05:20.691 "mask": "0x400", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "nvme_pcie": { 00:05:20.691 "mask": "0x800", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "iaa": { 00:05:20.691 "mask": "0x1000", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "nvme_tcp": { 00:05:20.691 "mask": "0x2000", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "bdev_nvme": { 00:05:20.691 "mask": "0x4000", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 }, 00:05:20.691 "sock": { 00:05:20.691 "mask": "0x8000", 00:05:20.691 "tpoint_mask": "0x0" 00:05:20.691 } 00:05:20.691 }' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.691 14:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.691 00:05:20.691 real 0m0.223s 00:05:20.692 user 0m0.186s 00:05:20.692 sys 0m0.027s 00:05:20.692 14:10:12 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.692 14:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.692 ************************************ 00:05:20.692 END TEST rpc_trace_cmd_test 00:05:20.692 ************************************ 00:05:20.951 14:10:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.951 14:10:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.951 14:10:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.951 14:10:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.951 14:10:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.951 14:10:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.951 14:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 ************************************ 00:05:20.951 START TEST rpc_daemon_integrity 00:05:20.951 ************************************ 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 14:10:12 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.951 { 00:05:20.951 "name": "Malloc2", 00:05:20.951 "aliases": [ 00:05:20.951 "3c7e62ce-ffa8-4ac7-ac8f-67f2c4527292" 00:05:20.951 ], 00:05:20.951 "product_name": "Malloc disk", 00:05:20.951 "block_size": 512, 00:05:20.951 "num_blocks": 16384, 00:05:20.951 "uuid": "3c7e62ce-ffa8-4ac7-ac8f-67f2c4527292", 00:05:20.951 "assigned_rate_limits": { 00:05:20.951 "rw_ios_per_sec": 0, 00:05:20.951 "rw_mbytes_per_sec": 0, 00:05:20.951 "r_mbytes_per_sec": 0, 00:05:20.951 "w_mbytes_per_sec": 0 00:05:20.951 }, 00:05:20.951 "claimed": false, 00:05:20.951 "zoned": false, 00:05:20.951 "supported_io_types": { 00:05:20.951 "read": true, 00:05:20.951 "write": true, 00:05:20.951 "unmap": true, 00:05:20.951 "flush": true, 00:05:20.951 "reset": true, 00:05:20.951 "nvme_admin": false, 00:05:20.951 "nvme_io": false, 00:05:20.951 "nvme_io_md": false, 00:05:20.951 "write_zeroes": true, 00:05:20.951 "zcopy": true, 00:05:20.951 "get_zone_info": false, 00:05:20.951 "zone_management": false, 00:05:20.951 "zone_append": false, 00:05:20.951 "compare": false, 00:05:20.951 "compare_and_write": false, 00:05:20.951 "abort": true, 00:05:20.951 "seek_hole": false, 00:05:20.951 "seek_data": false, 00:05:20.951 "copy": true, 00:05:20.951 "nvme_iov_md": false 00:05:20.951 }, 00:05:20.951 "memory_domains": [ 00:05:20.951 { 00:05:20.951 "dma_device_id": "system", 00:05:20.951 "dma_device_type": 
1 00:05:20.951 }, 00:05:20.951 { 00:05:20.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.951 "dma_device_type": 2 00:05:20.951 } 00:05:20.951 ], 00:05:20.951 "driver_specific": {} 00:05:20.951 } 00:05:20.951 ]' 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 [2024-07-12 14:10:12.885400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.951 [2024-07-12 14:10:12.885429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.951 [2024-07-12 14:10:12.885441] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaeeac0 00:05:20.951 [2024-07-12 14:10:12.885447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.951 [2024-07-12 14:10:12.886404] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.951 [2024-07-12 14:10:12.886425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.951 Passthru0 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.951 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:05:20.951 { 00:05:20.951 "name": "Malloc2", 00:05:20.951 "aliases": [ 00:05:20.951 "3c7e62ce-ffa8-4ac7-ac8f-67f2c4527292" 00:05:20.951 ], 00:05:20.951 "product_name": "Malloc disk", 00:05:20.951 "block_size": 512, 00:05:20.951 "num_blocks": 16384, 00:05:20.951 "uuid": "3c7e62ce-ffa8-4ac7-ac8f-67f2c4527292", 00:05:20.951 "assigned_rate_limits": { 00:05:20.951 "rw_ios_per_sec": 0, 00:05:20.951 "rw_mbytes_per_sec": 0, 00:05:20.951 "r_mbytes_per_sec": 0, 00:05:20.951 "w_mbytes_per_sec": 0 00:05:20.951 }, 00:05:20.951 "claimed": true, 00:05:20.951 "claim_type": "exclusive_write", 00:05:20.951 "zoned": false, 00:05:20.951 "supported_io_types": { 00:05:20.951 "read": true, 00:05:20.951 "write": true, 00:05:20.951 "unmap": true, 00:05:20.951 "flush": true, 00:05:20.951 "reset": true, 00:05:20.951 "nvme_admin": false, 00:05:20.952 "nvme_io": false, 00:05:20.952 "nvme_io_md": false, 00:05:20.952 "write_zeroes": true, 00:05:20.952 "zcopy": true, 00:05:20.952 "get_zone_info": false, 00:05:20.952 "zone_management": false, 00:05:20.952 "zone_append": false, 00:05:20.952 "compare": false, 00:05:20.952 "compare_and_write": false, 00:05:20.952 "abort": true, 00:05:20.952 "seek_hole": false, 00:05:20.952 "seek_data": false, 00:05:20.952 "copy": true, 00:05:20.952 "nvme_iov_md": false 00:05:20.952 }, 00:05:20.952 "memory_domains": [ 00:05:20.952 { 00:05:20.952 "dma_device_id": "system", 00:05:20.952 "dma_device_type": 1 00:05:20.952 }, 00:05:20.952 { 00:05:20.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.952 "dma_device_type": 2 00:05:20.952 } 00:05:20.952 ], 00:05:20.952 "driver_specific": {} 00:05:20.952 }, 00:05:20.952 { 00:05:20.952 "name": "Passthru0", 00:05:20.952 "aliases": [ 00:05:20.952 "993de542-ec45-5082-b799-1706c0a646bf" 00:05:20.952 ], 00:05:20.952 "product_name": "passthru", 00:05:20.952 "block_size": 512, 00:05:20.952 "num_blocks": 16384, 00:05:20.952 "uuid": "993de542-ec45-5082-b799-1706c0a646bf", 00:05:20.952 "assigned_rate_limits": { 00:05:20.952 
"rw_ios_per_sec": 0, 00:05:20.952 "rw_mbytes_per_sec": 0, 00:05:20.952 "r_mbytes_per_sec": 0, 00:05:20.952 "w_mbytes_per_sec": 0 00:05:20.952 }, 00:05:20.952 "claimed": false, 00:05:20.952 "zoned": false, 00:05:20.952 "supported_io_types": { 00:05:20.952 "read": true, 00:05:20.952 "write": true, 00:05:20.952 "unmap": true, 00:05:20.952 "flush": true, 00:05:20.952 "reset": true, 00:05:20.952 "nvme_admin": false, 00:05:20.952 "nvme_io": false, 00:05:20.952 "nvme_io_md": false, 00:05:20.952 "write_zeroes": true, 00:05:20.952 "zcopy": true, 00:05:20.952 "get_zone_info": false, 00:05:20.952 "zone_management": false, 00:05:20.952 "zone_append": false, 00:05:20.952 "compare": false, 00:05:20.952 "compare_and_write": false, 00:05:20.952 "abort": true, 00:05:20.952 "seek_hole": false, 00:05:20.952 "seek_data": false, 00:05:20.952 "copy": true, 00:05:20.952 "nvme_iov_md": false 00:05:20.952 }, 00:05:20.952 "memory_domains": [ 00:05:20.952 { 00:05:20.952 "dma_device_id": "system", 00:05:20.952 "dma_device_type": 1 00:05:20.952 }, 00:05:20.952 { 00:05:20.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.952 "dma_device_type": 2 00:05:20.952 } 00:05:20.952 ], 00:05:20.952 "driver_specific": { 00:05:20.952 "passthru": { 00:05:20.952 "name": "Passthru0", 00:05:20.952 "base_bdev_name": "Malloc2" 00:05:20.952 } 00:05:20.952 } 00:05:20.952 } 00:05:20.952 ]' 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.952 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.212 14:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.212 14:10:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.212 00:05:21.212 real 0m0.270s 00:05:21.212 user 0m0.169s 00:05:21.212 sys 0m0.036s 00:05:21.212 14:10:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.212 14:10:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.212 ************************************ 00:05:21.212 END TEST rpc_daemon_integrity 00:05:21.212 ************************************ 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.212 14:10:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.212 14:10:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2362833 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 2362833 ']' 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@952 -- # kill -0 2362833 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@953 -- # uname 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2362833 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2362833' 00:05:21.212 killing process with pid 2362833 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@967 -- # kill 2362833 00:05:21.212 14:10:13 rpc -- common/autotest_common.sh@972 -- # wait 2362833 00:05:21.471 00:05:21.471 real 0m2.436s 00:05:21.471 user 0m3.143s 00:05:21.471 sys 0m0.647s 00:05:21.471 14:10:13 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.471 14:10:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.471 ************************************ 00:05:21.471 END TEST rpc 00:05:21.471 ************************************ 00:05:21.471 14:10:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.471 14:10:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.471 14:10:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.471 14:10:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.471 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:05:21.471 ************************************ 00:05:21.471 START TEST skip_rpc 00:05:21.471 ************************************ 00:05:21.471 14:10:13 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.730 * Looking for test storage... 
00:05:21.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.730 14:10:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.730 14:10:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.730 14:10:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.730 14:10:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.730 14:10:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.730 14:10:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.730 ************************************ 00:05:21.730 START TEST skip_rpc 00:05:21.730 ************************************ 00:05:21.730 14:10:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:21.730 14:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2363464 00:05:21.730 14:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.730 14:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.730 14:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.730 [2024-07-12 14:10:13.636696] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:21.730 [2024-07-12 14:10:13.636738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363464 ] 00:05:21.730 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.730 [2024-07-12 14:10:13.688817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.989 [2024-07-12 14:10:13.761115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2363464 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2363464 ']' 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2363464 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363464 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363464' 00:05:27.258 killing process with pid 2363464 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2363464 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2363464 00:05:27.258 00:05:27.258 real 0m5.366s 00:05:27.258 user 0m5.150s 00:05:27.258 sys 0m0.246s 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.258 14:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.258 ************************************ 00:05:27.258 END TEST skip_rpc 00:05:27.258 ************************************ 00:05:27.258 14:10:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.258 14:10:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:27.258 14:10:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.258 14:10:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.258 
14:10:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.258 ************************************ 00:05:27.258 START TEST skip_rpc_with_json 00:05:27.258 ************************************ 00:05:27.258 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:27.258 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:27.258 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2364412 00:05:27.258 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2364412 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2364412 ']' 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.259 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.259 [2024-07-12 14:10:19.076974] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:27.259 [2024-07-12 14:10:19.077018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364412 ] 00:05:27.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.259 [2024-07-12 14:10:19.129451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.259 [2024-07-12 14:10:19.197570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.194 [2024-07-12 14:10:19.877154] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:28.194 request: 00:05:28.194 { 00:05:28.194 "trtype": "tcp", 00:05:28.194 "method": "nvmf_get_transports", 00:05:28.194 "req_id": 1 00:05:28.194 } 00:05:28.194 Got JSON-RPC error response 00:05:28.194 response: 00:05:28.194 { 00:05:28.194 "code": -19, 00:05:28.194 "message": "No such device" 00:05:28.194 } 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.194 [2024-07-12 14:10:19.885245] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.194 14:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.194 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.194 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.194 { 00:05:28.194 "subsystems": [ 00:05:28.194 { 00:05:28.194 "subsystem": "vfio_user_target", 00:05:28.194 "config": null 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "keyring", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "iobuf", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "iobuf_set_options", 00:05:28.194 "params": { 00:05:28.194 "small_pool_count": 8192, 00:05:28.194 "large_pool_count": 1024, 00:05:28.194 "small_bufsize": 8192, 00:05:28.194 "large_bufsize": 135168 00:05:28.194 } 00:05:28.194 } 00:05:28.194 ] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "sock", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "sock_set_default_impl", 00:05:28.194 "params": { 00:05:28.194 "impl_name": "posix" 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "sock_impl_set_options", 00:05:28.194 "params": { 00:05:28.194 "impl_name": "ssl", 00:05:28.194 "recv_buf_size": 4096, 00:05:28.194 "send_buf_size": 4096, 00:05:28.194 "enable_recv_pipe": true, 00:05:28.194 "enable_quickack": false, 00:05:28.194 "enable_placement_id": 0, 00:05:28.194 "enable_zerocopy_send_server": true, 00:05:28.194 "enable_zerocopy_send_client": false, 00:05:28.194 "zerocopy_threshold": 0, 
00:05:28.194 "tls_version": 0, 00:05:28.194 "enable_ktls": false 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "sock_impl_set_options", 00:05:28.194 "params": { 00:05:28.194 "impl_name": "posix", 00:05:28.194 "recv_buf_size": 2097152, 00:05:28.194 "send_buf_size": 2097152, 00:05:28.194 "enable_recv_pipe": true, 00:05:28.194 "enable_quickack": false, 00:05:28.194 "enable_placement_id": 0, 00:05:28.194 "enable_zerocopy_send_server": true, 00:05:28.194 "enable_zerocopy_send_client": false, 00:05:28.194 "zerocopy_threshold": 0, 00:05:28.194 "tls_version": 0, 00:05:28.194 "enable_ktls": false 00:05:28.194 } 00:05:28.194 } 00:05:28.194 ] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "vmd", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "accel", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "accel_set_options", 00:05:28.194 "params": { 00:05:28.194 "small_cache_size": 128, 00:05:28.194 "large_cache_size": 16, 00:05:28.194 "task_count": 2048, 00:05:28.194 "sequence_count": 2048, 00:05:28.194 "buf_count": 2048 00:05:28.194 } 00:05:28.194 } 00:05:28.194 ] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "bdev", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "bdev_set_options", 00:05:28.194 "params": { 00:05:28.194 "bdev_io_pool_size": 65535, 00:05:28.194 "bdev_io_cache_size": 256, 00:05:28.194 "bdev_auto_examine": true, 00:05:28.194 "iobuf_small_cache_size": 128, 00:05:28.194 "iobuf_large_cache_size": 16 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "bdev_raid_set_options", 00:05:28.194 "params": { 00:05:28.194 "process_window_size_kb": 1024 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "bdev_iscsi_set_options", 00:05:28.194 "params": { 00:05:28.194 "timeout_sec": 30 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "bdev_nvme_set_options", 00:05:28.194 "params": { 00:05:28.194 "action_on_timeout": 
"none", 00:05:28.194 "timeout_us": 0, 00:05:28.194 "timeout_admin_us": 0, 00:05:28.194 "keep_alive_timeout_ms": 10000, 00:05:28.194 "arbitration_burst": 0, 00:05:28.194 "low_priority_weight": 0, 00:05:28.194 "medium_priority_weight": 0, 00:05:28.194 "high_priority_weight": 0, 00:05:28.194 "nvme_adminq_poll_period_us": 10000, 00:05:28.194 "nvme_ioq_poll_period_us": 0, 00:05:28.194 "io_queue_requests": 0, 00:05:28.194 "delay_cmd_submit": true, 00:05:28.194 "transport_retry_count": 4, 00:05:28.194 "bdev_retry_count": 3, 00:05:28.194 "transport_ack_timeout": 0, 00:05:28.194 "ctrlr_loss_timeout_sec": 0, 00:05:28.194 "reconnect_delay_sec": 0, 00:05:28.194 "fast_io_fail_timeout_sec": 0, 00:05:28.194 "disable_auto_failback": false, 00:05:28.194 "generate_uuids": false, 00:05:28.194 "transport_tos": 0, 00:05:28.194 "nvme_error_stat": false, 00:05:28.194 "rdma_srq_size": 0, 00:05:28.194 "io_path_stat": false, 00:05:28.194 "allow_accel_sequence": false, 00:05:28.194 "rdma_max_cq_size": 0, 00:05:28.194 "rdma_cm_event_timeout_ms": 0, 00:05:28.194 "dhchap_digests": [ 00:05:28.194 "sha256", 00:05:28.194 "sha384", 00:05:28.194 "sha512" 00:05:28.194 ], 00:05:28.194 "dhchap_dhgroups": [ 00:05:28.194 "null", 00:05:28.194 "ffdhe2048", 00:05:28.194 "ffdhe3072", 00:05:28.194 "ffdhe4096", 00:05:28.194 "ffdhe6144", 00:05:28.194 "ffdhe8192" 00:05:28.194 ] 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "bdev_nvme_set_hotplug", 00:05:28.194 "params": { 00:05:28.194 "period_us": 100000, 00:05:28.194 "enable": false 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "bdev_wait_for_examine" 00:05:28.194 } 00:05:28.194 ] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "scsi", 00:05:28.194 "config": null 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "scheduler", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "framework_set_scheduler", 00:05:28.194 "params": { 00:05:28.194 "name": "static" 00:05:28.194 } 00:05:28.194 } 
00:05:28.194 ] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "vhost_scsi", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "vhost_blk", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "ublk", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "nbd", 00:05:28.194 "config": [] 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "subsystem": "nvmf", 00:05:28.194 "config": [ 00:05:28.194 { 00:05:28.194 "method": "nvmf_set_config", 00:05:28.194 "params": { 00:05:28.194 "discovery_filter": "match_any", 00:05:28.194 "admin_cmd_passthru": { 00:05:28.194 "identify_ctrlr": false 00:05:28.194 } 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "nvmf_set_max_subsystems", 00:05:28.194 "params": { 00:05:28.194 "max_subsystems": 1024 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "nvmf_set_crdt", 00:05:28.194 "params": { 00:05:28.194 "crdt1": 0, 00:05:28.194 "crdt2": 0, 00:05:28.194 "crdt3": 0 00:05:28.194 } 00:05:28.194 }, 00:05:28.194 { 00:05:28.194 "method": "nvmf_create_transport", 00:05:28.194 "params": { 00:05:28.194 "trtype": "TCP", 00:05:28.194 "max_queue_depth": 128, 00:05:28.194 "max_io_qpairs_per_ctrlr": 127, 00:05:28.194 "in_capsule_data_size": 4096, 00:05:28.194 "max_io_size": 131072, 00:05:28.194 "io_unit_size": 131072, 00:05:28.194 "max_aq_depth": 128, 00:05:28.194 "num_shared_buffers": 511, 00:05:28.194 "buf_cache_size": 4294967295, 00:05:28.194 "dif_insert_or_strip": false, 00:05:28.194 "zcopy": false, 00:05:28.194 "c2h_success": true, 00:05:28.194 "sock_priority": 0, 00:05:28.194 "abort_timeout_sec": 1, 00:05:28.194 "ack_timeout": 0, 00:05:28.194 "data_wr_pool_size": 0 00:05:28.194 } 00:05:28.194 } 00:05:28.195 ] 00:05:28.195 }, 00:05:28.195 { 00:05:28.195 "subsystem": "iscsi", 00:05:28.195 "config": [ 00:05:28.195 { 00:05:28.195 "method": "iscsi_set_options", 00:05:28.195 "params": { 00:05:28.195 "node_base": 
"iqn.2016-06.io.spdk", 00:05:28.195 "max_sessions": 128, 00:05:28.195 "max_connections_per_session": 2, 00:05:28.195 "max_queue_depth": 64, 00:05:28.195 "default_time2wait": 2, 00:05:28.195 "default_time2retain": 20, 00:05:28.195 "first_burst_length": 8192, 00:05:28.195 "immediate_data": true, 00:05:28.195 "allow_duplicated_isid": false, 00:05:28.195 "error_recovery_level": 0, 00:05:28.195 "nop_timeout": 60, 00:05:28.195 "nop_in_interval": 30, 00:05:28.195 "disable_chap": false, 00:05:28.195 "require_chap": false, 00:05:28.195 "mutual_chap": false, 00:05:28.195 "chap_group": 0, 00:05:28.195 "max_large_datain_per_connection": 64, 00:05:28.195 "max_r2t_per_connection": 4, 00:05:28.195 "pdu_pool_size": 36864, 00:05:28.195 "immediate_data_pool_size": 16384, 00:05:28.195 "data_out_pool_size": 2048 00:05:28.195 } 00:05:28.195 } 00:05:28.195 ] 00:05:28.195 } 00:05:28.195 ] 00:05:28.195 } 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2364412 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2364412 ']' 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2364412 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364412 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364412' 00:05:28.195 
killing process with pid 2364412 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2364412 00:05:28.195 14:10:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2364412 00:05:28.469 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.469 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2364650 00:05:28.469 14:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2364650 ']' 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364650' 00:05:33.765 killing process with pid 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2364650 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.765 00:05:33.765 real 0m6.730s 00:05:33.765 user 0m6.570s 00:05:33.765 sys 0m0.559s 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.765 14:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.765 ************************************ 00:05:33.765 END TEST skip_rpc_with_json 00:05:33.765 ************************************ 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:34.026 14:10:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.026 ************************************ 00:05:34.026 START TEST skip_rpc_with_delay 00:05:34.026 ************************************ 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.026 [2024-07-12 14:10:25.872635] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
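Earlier in this run, `rpc_cmd nvmf_get_transports --trtype tcp` was issued before any transport existed, and the target returned the JSON-RPC error shown in the log (code -19, "No such device"). A minimal sketch mirroring exactly the request and error shapes printed above; the `is_missing_transport` helper is hypothetical, not part of SPDK.

```python
import json

# Request as rpc_cmd displays it in the log above.
request = {
    "trtype": "tcp",
    "method": "nvmf_get_transports",
    "req_id": 1,
}

# Error body from the log: -19 is Linux ENODEV ("No such device").
error = {"code": -19, "message": "No such device"}
response = {"error": error}

def is_missing_transport(resp):
    """Hypothetical helper: True when the target says the transport does not exist."""
    err = resp.get("error")
    return err is not None and err["code"] == -19

# Round-trips cleanly as JSON and classifies the logged error.
print(json.dumps(request)["" == "" and 0:0] or is_missing_transport(response))  # -> True
```

The test then creates the transport (`nvmf_create_transport -t tcp`), after which the same query would return a result object instead of an error.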
00:05:34.026 [2024-07-12 14:10:25.872697] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.026 00:05:34.026 real 0m0.065s 00:05:34.026 user 0m0.046s 00:05:34.026 sys 0m0.018s 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.026 14:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:34.026 ************************************ 00:05:34.026 END TEST skip_rpc_with_delay 00:05:34.026 ************************************ 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:34.026 14:10:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:34.026 14:10:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:34.026 14:10:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.026 14:10:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.026 ************************************ 00:05:34.026 START TEST exit_on_failed_rpc_init 00:05:34.026 ************************************ 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2365621 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 2365621 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2365621 ']' 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.026 14:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.026 [2024-07-12 14:10:26.004266] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
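The `waitforlisten` step above blocks until the freshly started `spdk_tgt` accepts connections on `/var/tmp/spdk.sock`. A sketch of that polling pattern under the assumption that "listening" is detected by a successful Unix-socket connect; `wait_for_listen` is a hypothetical stand-in for the shell helper, not its actual implementation.

```python
import socket
import time

def wait_for_listen(path, timeout=5.0, interval=0.1):
    """Poll a Unix domain socket path until something accepts a connection,
    or give up after `timeout` seconds. Sketch of the waitforlisten idea."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)      # succeeds only once the target is listening
            return True
        except OSError:
            time.sleep(interval)  # not up yet; retry until the deadline
        finally:
            s.close()
    return False
```

The real helper also checks that the target process is still alive between retries, so a crashed target fails fast instead of burning the whole timeout.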
00:05:34.026 [2024-07-12 14:10:26.004307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365621 ] 00:05:34.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.296 [2024-07-12 14:10:26.058562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.296 [2024-07-12 14:10:26.130591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.864 14:10:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.864 14:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.864 [2024-07-12 14:10:26.863605] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:05:34.864 [2024-07-12 14:10:26.863654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365834 ] 00:05:35.123 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.123 [2024-07-12 14:10:26.916919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.123 [2024-07-12 14:10:26.990493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.123 [2024-07-12 14:10:26.990560] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
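The failure being exercised here is the second `spdk_tgt` instance aborting because `/var/tmp/spdk.sock` is already held by the first ("RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."). The underlying OS behavior can be reproduced in a few lines: binding a second Unix domain socket to a path that is still bound fails with `EADDRINUSE`. This sketch uses a throwaway path rather than the real SPDK socket.

```python
import errno
import os
import socket
import tempfile

# Stand-in for /var/tmp/spdk.sock: a fresh path in a temp directory.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.bind(path)          # first "spdk_tgt" owns the RPC socket path

second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    second.bind(path)     # second instance: same path, still in use
except OSError as exc:
    print(exc.errno == errno.EADDRINUSE)  # -> True
finally:
    second.close()
    first.close()
```

This is why the test expects the second target to call `spdk_app_stop` with a non-zero code, which the harness then maps to `es=234 -> 106 -> 1`.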
00:05:35.123 [2024-07-12 14:10:26.990569] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:35.123 [2024-07-12 14:10:26.990575] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2365621 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2365621 ']' 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2365621 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365621 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365621' 
00:05:35.123 killing process with pid 2365621 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2365621 00:05:35.123 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2365621 00:05:35.691 00:05:35.691 real 0m1.468s 00:05:35.691 user 0m1.688s 00:05:35.691 sys 0m0.408s 00:05:35.691 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.691 14:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 END TEST exit_on_failed_rpc_init 00:05:35.691 ************************************ 00:05:35.691 14:10:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.691 14:10:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.691 00:05:35.691 real 0m13.986s 00:05:35.691 user 0m13.606s 00:05:35.691 sys 0m1.464s 00:05:35.691 14:10:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.691 14:10:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 END TEST skip_rpc 00:05:35.691 ************************************ 00:05:35.691 14:10:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.691 14:10:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.691 14:10:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.691 14:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.691 14:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 START TEST rpc_client 00:05:35.691 ************************************ 00:05:35.691 14:10:27 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.691 * Looking for test storage... 00:05:35.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:35.691 14:10:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:35.691 OK 00:05:35.691 14:10:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.691 00:05:35.691 real 0m0.111s 00:05:35.691 user 0m0.052s 00:05:35.691 sys 0m0.067s 00:05:35.691 14:10:27 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.691 14:10:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 END TEST rpc_client 00:05:35.691 ************************************ 00:05:35.691 14:10:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.691 14:10:27 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.691 14:10:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.691 14:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.691 14:10:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 START TEST json_config 00:05:35.691 ************************************ 00:05:35.691 14:10:27 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.951 
14:10:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.951 14:10:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.951 14:10:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.951 14:10:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.951 14:10:27 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.951 14:10:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.951 14:10:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.951 14:10:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.951 14:10:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@47 -- # : 0 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.951 
14:10:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.951 14:10:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:35.951 INFO: JSON configuration test init 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.951 14:10:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.951 14:10:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.951 14:10:27 json_config -- json_config/common.sh@10 -- # shift 00:05:35.951 14:10:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.951 14:10:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.951 14:10:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.951 14:10:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.951 14:10:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:35.951 14:10:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2365981 00:05:35.951 14:10:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.951 Waiting for target to run... 00:05:35.951 14:10:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.951 14:10:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2365981 /var/tmp/spdk_tgt.sock 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 2365981 ']' 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.951 14:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.951 [2024-07-12 14:10:27.821953] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:35.951 [2024-07-12 14:10:27.821999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365981 ] 00:05:35.951 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.210 [2024-07-12 14:10:28.090810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.210 [2024-07-12 14:10:28.158759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:36.777 14:10:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.777 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.777 14:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.777 14:10:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:36.777 14:10:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.067 
14:10:31 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:40.067 14:10:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.067 14:10:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:40.067 14:10:31 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.067 14:10:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.326 MallocForNvmf0 00:05:40.326 14:10:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.326 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.326 MallocForNvmf1 00:05:40.326 14:10:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:40.326 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:40.585 [2024-07-12 14:10:32.460884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.585 14:10:32 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.585 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.843 14:10:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.844 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.844 14:10:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:40.844 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.108 14:10:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.108 14:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.373 [2024-07-12 14:10:33.147024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.373 14:10:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:41.373 14:10:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.373 14:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.373 14:10:33 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:05:41.373 14:10:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.373 14:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.373 14:10:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:41.373 14:10:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.373 14:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.632 MallocBdevForConfigChangeCheck 00:05:41.632 14:10:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:41.632 14:10:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.632 14:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.632 14:10:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:41.632 14:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.891 14:10:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:41.891 INFO: shutting down applications... 
00:05:41.891 14:10:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:41.891 14:10:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:41.891 14:10:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:41.891 14:10:33 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:43.795 Calling clear_iscsi_subsystem 00:05:43.796 Calling clear_nvmf_subsystem 00:05:43.796 Calling clear_nbd_subsystem 00:05:43.796 Calling clear_ublk_subsystem 00:05:43.796 Calling clear_vhost_blk_subsystem 00:05:43.796 Calling clear_vhost_scsi_subsystem 00:05:43.796 Calling clear_bdev_subsystem 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@345 -- # break 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:43.796 14:10:35 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:43.796 14:10:35 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:43.796 14:10:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:43.796 14:10:35 json_config -- json_config/common.sh@35 -- # [[ -n 2365981 ]] 00:05:43.796 14:10:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2365981 00:05:43.796 14:10:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:43.796 14:10:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.796 14:10:35 json_config -- json_config/common.sh@41 -- # kill -0 2365981 00:05:43.796 14:10:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.370 14:10:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.370 14:10:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.370 14:10:36 json_config -- json_config/common.sh@41 -- # kill -0 2365981 00:05:44.370 14:10:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.370 14:10:36 json_config -- json_config/common.sh@43 -- # break 00:05:44.370 14:10:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.370 14:10:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.370 SPDK target shutdown done 00:05:44.370 14:10:36 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:44.370 INFO: relaunching applications... 
00:05:44.370 14:10:36 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.370 14:10:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:44.370 14:10:36 json_config -- json_config/common.sh@10 -- # shift 00:05:44.370 14:10:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.370 14:10:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.370 14:10:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.370 14:10:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.370 14:10:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.370 14:10:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2367566 00:05:44.370 14:10:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.370 Waiting for target to run... 00:05:44.370 14:10:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.370 14:10:36 json_config -- json_config/common.sh@25 -- # waitforlisten 2367566 /var/tmp/spdk_tgt.sock 00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@829 -- # '[' -z 2367566 ']' 00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.370 14:10:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.370 [2024-07-12 14:10:36.169463] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:05:44.370 [2024-07-12 14:10:36.169519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367566 ] 00:05:44.370 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.629 [2024-07-12 14:10:36.601341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.889 [2024-07-12 14:10:36.693212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.178 [2024-07-12 14:10:39.702906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.178 [2024-07-12 14:10:39.735216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:48.438 14:10:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.438 14:10:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:48.438 14:10:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:48.438 00:05:48.438 14:10:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:48.438 14:10:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:48.438 INFO: Checking if target configuration is the same... 
00:05:48.438 14:10:40 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.438 14:10:40 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:48.438 14:10:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.438 + '[' 2 -ne 2 ']' 00:05:48.438 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:48.438 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:48.438 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.438 +++ basename /dev/fd/62 00:05:48.438 ++ mktemp /tmp/62.XXX 00:05:48.438 + tmp_file_1=/tmp/62.1jR 00:05:48.438 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.438 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.438 + tmp_file_2=/tmp/spdk_tgt_config.json.f0P 00:05:48.438 + ret=0 00:05:48.438 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.697 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.697 + diff -u /tmp/62.1jR /tmp/spdk_tgt_config.json.f0P 00:05:48.697 + echo 'INFO: JSON config files are the same' 00:05:48.697 INFO: JSON config files are the same 00:05:48.697 + rm /tmp/62.1jR /tmp/spdk_tgt_config.json.f0P 00:05:48.697 + exit 0 00:05:48.697 14:10:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:48.697 14:10:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:48.697 INFO: changing configuration and checking if this can be detected... 
00:05:48.697 14:10:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.697 14:10:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.957 14:10:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:48.957 14:10:40 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.957 14:10:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.957 + '[' 2 -ne 2 ']' 00:05:48.957 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:48.957 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:48.957 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.957 +++ basename /dev/fd/62 00:05:48.957 ++ mktemp /tmp/62.XXX 00:05:48.957 + tmp_file_1=/tmp/62.scy 00:05:48.957 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.957 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.957 + tmp_file_2=/tmp/spdk_tgt_config.json.aku 00:05:48.957 + ret=0 00:05:48.957 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:49.216 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:49.216 + diff -u /tmp/62.scy /tmp/spdk_tgt_config.json.aku 00:05:49.216 + ret=1 00:05:49.216 + echo '=== Start of file: /tmp/62.scy ===' 00:05:49.216 + cat /tmp/62.scy 00:05:49.216 + echo '=== End of file: /tmp/62.scy ===' 00:05:49.216 + echo '' 00:05:49.216 + echo '=== Start of file: /tmp/spdk_tgt_config.json.aku ===' 00:05:49.216 + cat /tmp/spdk_tgt_config.json.aku 00:05:49.216 + echo '=== End of file: /tmp/spdk_tgt_config.json.aku ===' 00:05:49.216 + echo '' 00:05:49.216 + rm /tmp/62.scy /tmp/spdk_tgt_config.json.aku 00:05:49.216 + exit 1 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:49.216 INFO: configuration change detected. 
00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:49.216 14:10:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.216 14:10:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 2367566 ]] 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:49.216 14:10:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:49.216 14:10:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.216 14:10:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 14:10:41 json_config -- json_config/json_config.sh@323 -- # killprocess 2367566 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@948 -- # '[' -z 2367566 ']' 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@952 -- # kill -0 
2367566 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@953 -- # uname 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2367566 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2367566' 00:05:49.474 killing process with pid 2367566 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@967 -- # kill 2367566 00:05:49.474 14:10:41 json_config -- common/autotest_common.sh@972 -- # wait 2367566 00:05:50.852 14:10:42 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.852 14:10:42 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:50.852 14:10:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.852 14:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.852 14:10:42 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:50.852 14:10:42 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:50.852 INFO: Success 00:05:50.852 00:05:50.852 real 0m15.137s 00:05:50.852 user 0m15.884s 00:05:50.852 sys 0m1.890s 00:05:50.852 14:10:42 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.852 14:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.852 ************************************ 00:05:50.852 END TEST json_config 00:05:50.852 ************************************ 00:05:50.852 14:10:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.852 14:10:42 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:50.852 14:10:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.852 14:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.852 14:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:51.111 ************************************ 00:05:51.111 START TEST json_config_extra_key 00:05:51.111 ************************************ 00:05:51.111 14:10:42 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:51.111 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.111 14:10:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.111 14:10:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.111 14:10:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.111 14:10:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.111 14:10:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.112 14:10:42 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.112 14:10:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.112 14:10:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:51.112 14:10:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.112 14:10:42 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:51.112 INFO: launching applications... 
00:05:51.112 14:10:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2368871 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.112 Waiting for target to run... 
00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2368871 /var/tmp/spdk_tgt.sock 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2368871 ']' 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.112 14:10:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.112 14:10:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.112 [2024-07-12 14:10:43.028285] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:51.112 [2024-07-12 14:10:43.028337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368871 ] 00:05:51.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.370 [2024-07-12 14:10:43.287804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.370 [2024-07-12 14:10:43.356082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.945 14:10:43 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.945 14:10:43 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:51.945 00:05:51.945 14:10:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:51.945 INFO: shutting down applications... 
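waitforlisten in the trace above (max_retries=100) blocks until the freshly launched target is up on its UNIX socket. A simplified, hypothetical stand-in for that bounded-polling idiom — the real helper additionally probes the socket via rpc.py; this sketch only waits for a path to appear:

```shell
# Simplified stand-in for waitforlisten: poll until a path exists,
# giving up after max_retries attempts (default 100, as in the log).
waitforpath() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The bounded retry count is what lets the ERR trap in these tests fire with a useful message instead of hanging forever when the target fails to start.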
00:05:51.945 14:10:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2368871 ]] 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2368871 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2368871 00:05:51.945 14:10:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2368871 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.567 14:10:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:52.567 SPDK target shutdown done 00:05:52.567 14:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:52.567 Success 00:05:52.567 00:05:52.567 real 0m1.449s 00:05:52.567 user 0m1.254s 00:05:52.567 sys 0m0.358s 00:05:52.567 14:10:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.567 14:10:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.567 
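The shutdown sequence just traced sends SIGINT and then polls with `kill -0` (up to 30 times, 0.5 s apart) until the target exits. A hedged sketch of that idiom — the signal is parameterized here only so the loop can be exercised outside SPDK; the log's common.sh hardcodes SIGINT:

```shell
# Sketch of common.sh's shutdown loop: signal, then poll for exit.
# kill -0 delivers no signal; it only tests that the pid still exists.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT}
    kill -"$sig" "$pid" 2>/dev/null
    local i
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1  # still alive after ~15 s
}
```

One caveat when reusing this outside a test harness: background jobs in non-interactive shells ignore SIGINT, so the usage below signals with SIGTERM instead.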
************************************ 00:05:52.567 END TEST json_config_extra_key 00:05:52.567 ************************************ 00:05:52.567 14:10:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.567 14:10:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.567 14:10:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.567 14:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.567 14:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:52.567 ************************************ 00:05:52.567 START TEST alias_rpc 00:05:52.567 ************************************ 00:05:52.567 14:10:44 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.567 * Looking for test storage... 00:05:52.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:52.568 14:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.568 14:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2369253 00:05:52.568 14:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.568 14:10:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2369253 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2369253 ']' 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:52.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.568 14:10:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.568 [2024-07-12 14:10:44.515518] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:05:52.568 [2024-07-12 14:10:44.515564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369253 ] 00:05:52.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.568 [2024-07-12 14:10:44.567376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.826 [2024-07-12 14:10:44.641682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.395 14:10:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.395 14:10:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.395 14:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:53.654 14:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2369253 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2369253 ']' 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2369253 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369253 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.654 
14:10:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369253' 00:05:53.654 killing process with pid 2369253 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@967 -- # kill 2369253 00:05:53.654 14:10:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 2369253 00:05:53.912 00:05:53.912 real 0m1.480s 00:05:53.912 user 0m1.643s 00:05:53.912 sys 0m0.372s 00:05:53.912 14:10:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.912 14:10:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.913 ************************************ 00:05:53.913 END TEST alias_rpc 00:05:53.913 ************************************ 00:05:53.913 14:10:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.913 14:10:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:53.913 14:10:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:53.913 14:10:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.913 14:10:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.913 14:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:54.171 ************************************ 00:05:54.171 START TEST spdkcli_tcp 00:05:54.171 ************************************ 00:05:54.171 14:10:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.171 * Looking for test storage... 
00:05:54.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:54.171 14:10:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:54.171 14:10:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:54.171 14:10:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2369535 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2369535 00:05:54.171 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2369535 ']' 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.171 14:10:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.171 [2024-07-12 14:10:46.059250] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:05:54.171 [2024-07-12 14:10:46.059299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369535 ] 00:05:54.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.171 [2024-07-12 14:10:46.112305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.430 [2024-07-12 14:10:46.186860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.430 [2024-07-12 14:10:46.186862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.998 14:10:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.998 14:10:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:54.998 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2369563 00:05:54.998 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:54.998 14:10:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.257 [ 00:05:55.257 "bdev_malloc_delete", 00:05:55.257 "bdev_malloc_create", 00:05:55.257 "bdev_null_resize", 00:05:55.257 "bdev_null_delete", 00:05:55.257 "bdev_null_create", 00:05:55.257 "bdev_nvme_cuse_unregister", 00:05:55.257 "bdev_nvme_cuse_register", 00:05:55.257 "bdev_opal_new_user", 00:05:55.257 "bdev_opal_set_lock_state", 00:05:55.257 "bdev_opal_delete", 00:05:55.257 "bdev_opal_get_info", 00:05:55.257 "bdev_opal_create", 00:05:55.257 "bdev_nvme_opal_revert", 00:05:55.257 
"bdev_nvme_opal_init", 00:05:55.257 "bdev_nvme_send_cmd", 00:05:55.257 "bdev_nvme_get_path_iostat", 00:05:55.257 "bdev_nvme_get_mdns_discovery_info", 00:05:55.258 "bdev_nvme_stop_mdns_discovery", 00:05:55.258 "bdev_nvme_start_mdns_discovery", 00:05:55.258 "bdev_nvme_set_multipath_policy", 00:05:55.258 "bdev_nvme_set_preferred_path", 00:05:55.258 "bdev_nvme_get_io_paths", 00:05:55.258 "bdev_nvme_remove_error_injection", 00:05:55.258 "bdev_nvme_add_error_injection", 00:05:55.258 "bdev_nvme_get_discovery_info", 00:05:55.258 "bdev_nvme_stop_discovery", 00:05:55.258 "bdev_nvme_start_discovery", 00:05:55.258 "bdev_nvme_get_controller_health_info", 00:05:55.258 "bdev_nvme_disable_controller", 00:05:55.258 "bdev_nvme_enable_controller", 00:05:55.258 "bdev_nvme_reset_controller", 00:05:55.258 "bdev_nvme_get_transport_statistics", 00:05:55.258 "bdev_nvme_apply_firmware", 00:05:55.258 "bdev_nvme_detach_controller", 00:05:55.258 "bdev_nvme_get_controllers", 00:05:55.258 "bdev_nvme_attach_controller", 00:05:55.258 "bdev_nvme_set_hotplug", 00:05:55.258 "bdev_nvme_set_options", 00:05:55.258 "bdev_passthru_delete", 00:05:55.258 "bdev_passthru_create", 00:05:55.258 "bdev_lvol_set_parent_bdev", 00:05:55.258 "bdev_lvol_set_parent", 00:05:55.258 "bdev_lvol_check_shallow_copy", 00:05:55.258 "bdev_lvol_start_shallow_copy", 00:05:55.258 "bdev_lvol_grow_lvstore", 00:05:55.258 "bdev_lvol_get_lvols", 00:05:55.258 "bdev_lvol_get_lvstores", 00:05:55.258 "bdev_lvol_delete", 00:05:55.258 "bdev_lvol_set_read_only", 00:05:55.258 "bdev_lvol_resize", 00:05:55.258 "bdev_lvol_decouple_parent", 00:05:55.258 "bdev_lvol_inflate", 00:05:55.258 "bdev_lvol_rename", 00:05:55.258 "bdev_lvol_clone_bdev", 00:05:55.258 "bdev_lvol_clone", 00:05:55.258 "bdev_lvol_snapshot", 00:05:55.258 "bdev_lvol_create", 00:05:55.258 "bdev_lvol_delete_lvstore", 00:05:55.258 "bdev_lvol_rename_lvstore", 00:05:55.258 "bdev_lvol_create_lvstore", 00:05:55.258 "bdev_raid_set_options", 00:05:55.258 "bdev_raid_remove_base_bdev", 
00:05:55.258 "bdev_raid_add_base_bdev", 00:05:55.258 "bdev_raid_delete", 00:05:55.258 "bdev_raid_create", 00:05:55.258 "bdev_raid_get_bdevs", 00:05:55.258 "bdev_error_inject_error", 00:05:55.258 "bdev_error_delete", 00:05:55.258 "bdev_error_create", 00:05:55.258 "bdev_split_delete", 00:05:55.258 "bdev_split_create", 00:05:55.258 "bdev_delay_delete", 00:05:55.258 "bdev_delay_create", 00:05:55.258 "bdev_delay_update_latency", 00:05:55.258 "bdev_zone_block_delete", 00:05:55.258 "bdev_zone_block_create", 00:05:55.258 "blobfs_create", 00:05:55.258 "blobfs_detect", 00:05:55.258 "blobfs_set_cache_size", 00:05:55.258 "bdev_aio_delete", 00:05:55.258 "bdev_aio_rescan", 00:05:55.258 "bdev_aio_create", 00:05:55.258 "bdev_ftl_set_property", 00:05:55.258 "bdev_ftl_get_properties", 00:05:55.258 "bdev_ftl_get_stats", 00:05:55.258 "bdev_ftl_unmap", 00:05:55.258 "bdev_ftl_unload", 00:05:55.258 "bdev_ftl_delete", 00:05:55.258 "bdev_ftl_load", 00:05:55.258 "bdev_ftl_create", 00:05:55.258 "bdev_virtio_attach_controller", 00:05:55.258 "bdev_virtio_scsi_get_devices", 00:05:55.258 "bdev_virtio_detach_controller", 00:05:55.258 "bdev_virtio_blk_set_hotplug", 00:05:55.258 "bdev_iscsi_delete", 00:05:55.258 "bdev_iscsi_create", 00:05:55.258 "bdev_iscsi_set_options", 00:05:55.258 "accel_error_inject_error", 00:05:55.258 "ioat_scan_accel_module", 00:05:55.258 "dsa_scan_accel_module", 00:05:55.258 "iaa_scan_accel_module", 00:05:55.258 "vfu_virtio_create_scsi_endpoint", 00:05:55.258 "vfu_virtio_scsi_remove_target", 00:05:55.258 "vfu_virtio_scsi_add_target", 00:05:55.258 "vfu_virtio_create_blk_endpoint", 00:05:55.258 "vfu_virtio_delete_endpoint", 00:05:55.258 "keyring_file_remove_key", 00:05:55.258 "keyring_file_add_key", 00:05:55.258 "keyring_linux_set_options", 00:05:55.258 "iscsi_get_histogram", 00:05:55.258 "iscsi_enable_histogram", 00:05:55.258 "iscsi_set_options", 00:05:55.258 "iscsi_get_auth_groups", 00:05:55.258 "iscsi_auth_group_remove_secret", 00:05:55.258 "iscsi_auth_group_add_secret", 
00:05:55.258 "iscsi_delete_auth_group", 00:05:55.258 "iscsi_create_auth_group", 00:05:55.258 "iscsi_set_discovery_auth", 00:05:55.258 "iscsi_get_options", 00:05:55.258 "iscsi_target_node_request_logout", 00:05:55.258 "iscsi_target_node_set_redirect", 00:05:55.258 "iscsi_target_node_set_auth", 00:05:55.258 "iscsi_target_node_add_lun", 00:05:55.258 "iscsi_get_stats", 00:05:55.258 "iscsi_get_connections", 00:05:55.258 "iscsi_portal_group_set_auth", 00:05:55.258 "iscsi_start_portal_group", 00:05:55.258 "iscsi_delete_portal_group", 00:05:55.258 "iscsi_create_portal_group", 00:05:55.258 "iscsi_get_portal_groups", 00:05:55.258 "iscsi_delete_target_node", 00:05:55.258 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.258 "iscsi_target_node_add_pg_ig_maps", 00:05:55.258 "iscsi_create_target_node", 00:05:55.258 "iscsi_get_target_nodes", 00:05:55.258 "iscsi_delete_initiator_group", 00:05:55.258 "iscsi_initiator_group_remove_initiators", 00:05:55.258 "iscsi_initiator_group_add_initiators", 00:05:55.258 "iscsi_create_initiator_group", 00:05:55.258 "iscsi_get_initiator_groups", 00:05:55.258 "nvmf_set_crdt", 00:05:55.258 "nvmf_set_config", 00:05:55.258 "nvmf_set_max_subsystems", 00:05:55.258 "nvmf_stop_mdns_prr", 00:05:55.258 "nvmf_publish_mdns_prr", 00:05:55.258 "nvmf_subsystem_get_listeners", 00:05:55.258 "nvmf_subsystem_get_qpairs", 00:05:55.258 "nvmf_subsystem_get_controllers", 00:05:55.258 "nvmf_get_stats", 00:05:55.258 "nvmf_get_transports", 00:05:55.258 "nvmf_create_transport", 00:05:55.258 "nvmf_get_targets", 00:05:55.258 "nvmf_delete_target", 00:05:55.258 "nvmf_create_target", 00:05:55.258 "nvmf_subsystem_allow_any_host", 00:05:55.258 "nvmf_subsystem_remove_host", 00:05:55.258 "nvmf_subsystem_add_host", 00:05:55.258 "nvmf_ns_remove_host", 00:05:55.258 "nvmf_ns_add_host", 00:05:55.258 "nvmf_subsystem_remove_ns", 00:05:55.258 "nvmf_subsystem_add_ns", 00:05:55.258 "nvmf_subsystem_listener_set_ana_state", 00:05:55.258 "nvmf_discovery_get_referrals", 00:05:55.258 
"nvmf_discovery_remove_referral", 00:05:55.258 "nvmf_discovery_add_referral", 00:05:55.258 "nvmf_subsystem_remove_listener", 00:05:55.258 "nvmf_subsystem_add_listener", 00:05:55.258 "nvmf_delete_subsystem", 00:05:55.258 "nvmf_create_subsystem", 00:05:55.258 "nvmf_get_subsystems", 00:05:55.258 "env_dpdk_get_mem_stats", 00:05:55.258 "nbd_get_disks", 00:05:55.258 "nbd_stop_disk", 00:05:55.258 "nbd_start_disk", 00:05:55.258 "ublk_recover_disk", 00:05:55.258 "ublk_get_disks", 00:05:55.258 "ublk_stop_disk", 00:05:55.258 "ublk_start_disk", 00:05:55.258 "ublk_destroy_target", 00:05:55.258 "ublk_create_target", 00:05:55.258 "virtio_blk_create_transport", 00:05:55.258 "virtio_blk_get_transports", 00:05:55.258 "vhost_controller_set_coalescing", 00:05:55.258 "vhost_get_controllers", 00:05:55.258 "vhost_delete_controller", 00:05:55.258 "vhost_create_blk_controller", 00:05:55.258 "vhost_scsi_controller_remove_target", 00:05:55.258 "vhost_scsi_controller_add_target", 00:05:55.258 "vhost_start_scsi_controller", 00:05:55.258 "vhost_create_scsi_controller", 00:05:55.258 "thread_set_cpumask", 00:05:55.258 "framework_get_governor", 00:05:55.258 "framework_get_scheduler", 00:05:55.258 "framework_set_scheduler", 00:05:55.258 "framework_get_reactors", 00:05:55.258 "thread_get_io_channels", 00:05:55.258 "thread_get_pollers", 00:05:55.258 "thread_get_stats", 00:05:55.258 "framework_monitor_context_switch", 00:05:55.258 "spdk_kill_instance", 00:05:55.258 "log_enable_timestamps", 00:05:55.258 "log_get_flags", 00:05:55.258 "log_clear_flag", 00:05:55.258 "log_set_flag", 00:05:55.258 "log_get_level", 00:05:55.258 "log_set_level", 00:05:55.258 "log_get_print_level", 00:05:55.258 "log_set_print_level", 00:05:55.258 "framework_enable_cpumask_locks", 00:05:55.258 "framework_disable_cpumask_locks", 00:05:55.258 "framework_wait_init", 00:05:55.258 "framework_start_init", 00:05:55.258 "scsi_get_devices", 00:05:55.258 "bdev_get_histogram", 00:05:55.258 "bdev_enable_histogram", 00:05:55.258 
"bdev_set_qos_limit", 00:05:55.258 "bdev_set_qd_sampling_period", 00:05:55.258 "bdev_get_bdevs", 00:05:55.259 "bdev_reset_iostat", 00:05:55.259 "bdev_get_iostat", 00:05:55.259 "bdev_examine", 00:05:55.259 "bdev_wait_for_examine", 00:05:55.259 "bdev_set_options", 00:05:55.259 "notify_get_notifications", 00:05:55.259 "notify_get_types", 00:05:55.259 "accel_get_stats", 00:05:55.259 "accel_set_options", 00:05:55.259 "accel_set_driver", 00:05:55.259 "accel_crypto_key_destroy", 00:05:55.259 "accel_crypto_keys_get", 00:05:55.259 "accel_crypto_key_create", 00:05:55.259 "accel_assign_opc", 00:05:55.259 "accel_get_module_info", 00:05:55.259 "accel_get_opc_assignments", 00:05:55.259 "vmd_rescan", 00:05:55.259 "vmd_remove_device", 00:05:55.259 "vmd_enable", 00:05:55.259 "sock_get_default_impl", 00:05:55.259 "sock_set_default_impl", 00:05:55.259 "sock_impl_set_options", 00:05:55.259 "sock_impl_get_options", 00:05:55.259 "iobuf_get_stats", 00:05:55.259 "iobuf_set_options", 00:05:55.259 "keyring_get_keys", 00:05:55.259 "framework_get_pci_devices", 00:05:55.259 "framework_get_config", 00:05:55.259 "framework_get_subsystems", 00:05:55.259 "vfu_tgt_set_base_path", 00:05:55.259 "trace_get_info", 00:05:55.259 "trace_get_tpoint_group_mask", 00:05:55.259 "trace_disable_tpoint_group", 00:05:55.259 "trace_enable_tpoint_group", 00:05:55.259 "trace_clear_tpoint_mask", 00:05:55.259 "trace_set_tpoint_mask", 00:05:55.259 "spdk_get_version", 00:05:55.259 "rpc_get_methods" 00:05:55.259 ] 00:05:55.259 14:10:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.259 14:10:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.259 14:10:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2369535 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2369535 ']' 
00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2369535 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369535 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369535' 00:05:55.259 killing process with pid 2369535 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2369535 00:05:55.259 14:10:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2369535 00:05:55.518 00:05:55.518 real 0m1.498s 00:05:55.518 user 0m2.787s 00:05:55.518 sys 0m0.435s 00:05:55.518 14:10:47 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.518 14:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 ************************************ 00:05:55.518 END TEST spdkcli_tcp 00:05:55.518 ************************************ 00:05:55.518 14:10:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.518 14:10:47 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.518 14:10:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.518 14:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.518 14:10:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 ************************************ 00:05:55.518 START TEST dpdk_mem_utility 00:05:55.518 ************************************ 00:05:55.518 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.778 * Looking for test storage... 00:05:55.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:55.778 14:10:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:55.778 14:10:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2369839 00:05:55.778 14:10:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2369839 00:05:55.778 14:10:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2369839 ']' 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.778 14:10:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.778 [2024-07-12 14:10:47.612876] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:05:55.778 [2024-07-12 14:10:47.612923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369839 ] 00:05:55.778 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.778 [2024-07-12 14:10:47.666750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.778 [2024-07-12 14:10:47.740813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.715 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.715 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:56.715 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.715 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.715 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.715 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.715 { 00:05:56.715 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.715 } 00:05:56.715 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.715 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.715 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:56.715 1 heaps totaling size 814.000000 MiB 00:05:56.715 size: 814.000000 MiB heap id: 0 00:05:56.715 end heaps---------- 00:05:56.715 8 mempools totaling size 598.116089 MiB 00:05:56.715 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.715 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.715 size: 84.521057 MiB name: bdev_io_2369839 00:05:56.715 size: 51.011292 MiB name: evtpool_2369839 
00:05:56.715 size: 50.003479 MiB name: msgpool_2369839
00:05:56.715 size: 21.763794 MiB name: PDU_Pool
00:05:56.715 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:56.715 size: 0.026123 MiB name: Session_Pool
00:05:56.715 end mempools-------
00:05:56.715 6 memzones totaling size 4.142822 MiB
00:05:56.715 size: 1.000366 MiB name: RG_ring_0_2369839
00:05:56.715 size: 1.000366 MiB name: RG_ring_1_2369839
00:05:56.715 size: 1.000366 MiB name: RG_ring_4_2369839
00:05:56.715 size: 1.000366 MiB name: RG_ring_5_2369839
00:05:56.715 size: 0.125366 MiB name: RG_ring_2_2369839
00:05:56.715 size: 0.015991 MiB name: RG_ring_3_2369839
00:05:56.715 end memzones-------
00:05:56.715 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:56.715 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:05:56.715 list of free elements. size: 12.519348 MiB
00:05:56.715 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:56.715 element at address: 0x200018e00000 with size: 0.999878 MiB
00:05:56.715 element at address: 0x200019000000 with size: 0.999878 MiB
00:05:56.715 element at address: 0x200003e00000 with size: 0.996277 MiB
00:05:56.715 element at address: 0x200031c00000 with size: 0.994446 MiB
00:05:56.715 element at address: 0x200013800000 with size: 0.978699 MiB
00:05:56.715 element at address: 0x200007000000 with size: 0.959839 MiB
00:05:56.715 element at address: 0x200019200000 with size: 0.936584 MiB
00:05:56.715 element at address: 0x200000200000 with size: 0.841614 MiB
00:05:56.715 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:05:56.715 element at address: 0x20000b200000 with size: 0.490723 MiB
00:05:56.715 element at address: 0x200000800000 with size: 0.487793 MiB
00:05:56.715 element at address: 0x200019400000 with size: 0.485657 MiB
00:05:56.715 element at address: 0x200027e00000 with size: 0.410034 MiB
00:05:56.715 element at address: 0x200003a00000 with size: 0.355530 MiB
00:05:56.715 list of standard malloc elements. size: 199.218079 MiB
00:05:56.715 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:56.715 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:56.715 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:56.715 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:56.715 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:56.715 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:56.715 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:56.715 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:56.715 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:05:56.715 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:05:56.715 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:05:56.715 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:05:56.715 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:05:56.715 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:05:56.715 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003adb300 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003adb500 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003affa80 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003affb40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:05:56.716 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:05:56.716 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200027e69040 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:05:56.716 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:05:56.716 list of memzone associated elements. size: 602.262573 MiB
00:05:56.716 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:05:56.716 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:56.716 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:05:56.716 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:56.716 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:05:56.716 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2369839_0
00:05:56.716 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:05:56.716 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2369839_0
00:05:56.716 element at address: 0x200003fff380 with size: 48.003052 MiB
00:05:56.716 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2369839_0
00:05:56.716 element at address: 0x2000195be940 with size: 20.255554 MiB
00:05:56.716 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:56.716 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:05:56.716 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:56.716 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:05:56.716 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2369839
00:05:56.716 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:05:56.716 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2369839
00:05:56.716 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:56.716 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2369839
00:05:56.716 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:05:56.716 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:56.716 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:05:56.716 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:56.716 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:05:56.716 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:56.716 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:05:56.716 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:56.716 element at address: 0x200003eff180 with size: 1.000488 MiB
00:05:56.716 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2369839
00:05:56.716 element at address: 0x200003affc00 with size: 1.000488 MiB
00:05:56.716 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2369839
00:05:56.716 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:05:56.716 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2369839
00:05:56.716 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:05:56.716 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2369839
00:05:56.716 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:05:56.716 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2369839
00:05:56.716 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:05:56.716 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:56.716 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:05:56.716 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:56.716 element at address: 0x20001947c540 with size: 0.250488 MiB
00:05:56.716 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:56.716 element at address: 0x200003adf880 with size: 0.125488 MiB
00:05:56.716 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2369839
00:05:56.716 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:05:56.716 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:56.716 element at address: 0x200027e69100 with size: 0.023743 MiB
00:05:56.716 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:56.716 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:05:56.716 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2369839
00:05:56.716 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:05:56.716 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:56.716 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:05:56.716 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2369839
00:05:56.716 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:05:56.716 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2369839
00:05:56.716 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:05:56.716 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:56.716 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:56.716 14:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2369839
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2369839 ']'
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2369839
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2369839
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2369839'
killing process with pid 2369839
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2369839
00:05:56.716 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2369839
00:05:56.975
00:05:56.975 real 0m1.395s
00:05:56.975 user 0m1.489s
00:05:56.975 sys 0m0.386s
00:05:56.975 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:56.975 14:10:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:56.975 ************************************
00:05:56.975 END TEST dpdk_mem_utility
00:05:56.975 ************************************
00:05:56.975 14:10:48 -- common/autotest_common.sh@1142 -- # return 0
00:05:56.975 14:10:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:56.975 14:10:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:56.975 14:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:56.975 14:10:48 -- common/autotest_common.sh@10 -- # set +x
00:05:56.975 ************************************
00:05:56.975 START TEST event
00:05:56.975 ************************************
00:05:56.975 14:10:48 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:57.233 * Looking for test storage...
00:05:57.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:57.233 14:10:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:57.233 14:10:49 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:57.233 14:10:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:57.233 14:10:49 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:57.233 14:10:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:57.233 14:10:49 event -- common/autotest_common.sh@10 -- # set +x
00:05:57.233 ************************************
00:05:57.233 START TEST event_perf
00:05:57.233 ************************************
00:05:57.233 14:10:49 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:57.233 Running I/O for 1 seconds...[2024-07-12 14:10:49.072926] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:05:57.233 [2024-07-12 14:10:49.072993] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370130 ]
00:05:57.233 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.233 [2024-07-12 14:10:49.131335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:57.233 [2024-07-12 14:10:49.207255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:57.233 [2024-07-12 14:10:49.207354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:57.233 [2024-07-12 14:10:49.207441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:57.233 [2024-07-12 14:10:49.207443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.610 Running I/O for 1 seconds...
00:05:58.610 lcore 0: 207087
00:05:58.610 lcore 1: 207087
00:05:58.610 lcore 2: 207086
00:05:58.610 lcore 3: 207085
00:05:58.610 done.
00:05:58.610
00:05:58.610 real 0m1.228s
00:05:58.610 user 0m4.146s
00:05:58.610 sys 0m0.078s
00:05:58.610 14:10:50 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:58.610 14:10:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:58.610 ************************************
00:05:58.610 END TEST event_perf
00:05:58.610 ************************************
00:05:58.610 14:10:50 event -- common/autotest_common.sh@1142 -- # return 0
00:05:58.610 14:10:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:58.610 14:10:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:58.610 14:10:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:58.610 14:10:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:58.610 ************************************
00:05:58.610 START TEST event_reactor
00:05:58.610 ************************************
00:05:58.610 14:10:50 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:58.610 [2024-07-12 14:10:50.358941] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:05:58.610 [2024-07-12 14:10:50.359005] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370382 ]
00:05:58.610 EAL: No free 2048 kB hugepages reported on node 1
00:05:58.610 [2024-07-12 14:10:50.414701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.610 [2024-07-12 14:10:50.485745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.547 test_start
00:05:59.547 oneshot
00:05:59.547 tick 100
00:05:59.547 tick 100
00:05:59.547 tick 250
00:05:59.547 tick 100
00:05:59.547 tick 100
00:05:59.547 tick 100
00:05:59.547 tick 250
00:05:59.547 tick 500
00:05:59.547 tick 100
00:05:59.547 tick 100
00:05:59.547 tick 250
00:05:59.547 tick 100
00:05:59.547 tick 100
00:05:59.547 test_end
00:05:59.547
00:05:59.547 real 0m1.212s
00:05:59.547 user 0m1.137s
00:05:59.547 sys 0m0.071s
00:05:59.547 14:10:51 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:59.547 14:10:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:59.547 ************************************
00:05:59.547 END TEST event_reactor
00:05:59.547 ************************************
00:05:59.806 14:10:51 event -- common/autotest_common.sh@1142 -- # return 0
00:05:59.806 14:10:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:59.806 14:10:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:05:59.806 14:10:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:59.806 14:10:51 event -- common/autotest_common.sh@10 -- # set +x
00:05:59.806 ************************************
00:05:59.806 START TEST event_reactor_perf
00:05:59.806 ************************************
00:05:59.806 14:10:51 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:59.806 [2024-07-12 14:10:51.631099] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:05:59.806 [2024-07-12 14:10:51.631164] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370635 ]
00:05:59.806 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.806 [2024-07-12 14:10:51.687771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.806 [2024-07-12 14:10:51.758588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.182 test_start
00:06:01.182 test_end
00:06:01.182 Performance: 506852 events per second
00:06:01.182
00:06:01.182 real 0m1.214s
00:06:01.182 user 0m1.141s
00:06:01.182 sys 0m0.069s
00:06:01.182 14:10:52 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:01.182 14:10:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:01.182 ************************************
00:06:01.182 END TEST event_reactor_perf
00:06:01.182 ************************************
00:06:01.182 14:10:52 event -- common/autotest_common.sh@1142 -- # return 0
00:06:01.182 14:10:52 event -- event/event.sh@49 -- # uname -s
00:06:01.182 14:10:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:01.182 14:10:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:01.182 14:10:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:01.182 14:10:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:01.182 14:10:52 event -- common/autotest_common.sh@10 -- # set +x
00:06:01.182 ************************************
00:06:01.182 START TEST event_scheduler
00:06:01.182 ************************************
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:01.182 * Looking for test storage...
00:06:01.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:01.182 14:10:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:01.182 14:10:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2370907
00:06:01.182 14:10:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:01.182 14:10:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:01.182 14:10:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2370907
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2370907 ']'
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:01.182 14:10:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:01.182 [2024-07-12 14:10:53.013924] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:06:01.182 [2024-07-12 14:10:53.013971] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370907 ]
00:06:01.182 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.182 [2024-07-12 14:10:53.063940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:01.182 [2024-07-12 14:10:53.140083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.182 [2024-07-12 14:10:53.140165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.182 [2024-07-12 14:10:53.140272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:01.182 [2024-07-12 14:10:53.140273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:06:02.117 14:10:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 [2024-07-12 14:10:53.834699] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
[2024-07-12 14:10:53.834717] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
[2024-07-12 14:10:53.834726] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-07-12 14:10:53.834731] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-07-12 14:10:53.834737] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 [2024-07-12 14:10:53.905261] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 ************************************
00:06:02.117 START TEST scheduler_create_thread
00:06:02.117 ************************************
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 2
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 3
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 4
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 5
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 6
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.117 7
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.117 14:10:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 8
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 9
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 10
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:02.118 14:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:03.494 14:10:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:03.495 14:10:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:03.495 14:10:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:03.495 14:10:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:03.495 14:10:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:04.873 14:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:04.873
00:06:04.873 real 0m2.619s
00:06:04.873 user 0m0.021s
00:06:04.873 sys 0m0.006s
00:06:04.873 14:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:04.873 14:10:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:04.873 ************************************
00:06:04.873 END TEST scheduler_create_thread
00:06:04.873 ************************************
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:06:04.873 14:10:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:04.873 14:10:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2370907
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2370907 ']'
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2370907
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2370907
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2370907'
killing process with pid 2370907
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2370907
00:06:04.873 14:10:56 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2370907
00:06:05.133 [2024-07-12 14:10:57.039425] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:05.392
00:06:05.392 real 0m4.344s
00:06:05.392 user 0m8.276s
00:06:05.392 sys 0m0.356s
00:06:05.392 14:10:57 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:05.392 14:10:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:05.392 ************************************
00:06:05.392 END TEST event_scheduler
00:06:05.392 ************************************
00:06:05.392 14:10:57 event -- common/autotest_common.sh@1142 -- # return 0
00:06:05.392 14:10:57 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:05.392 14:10:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:05.392 14:10:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:05.392 14:10:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.392 14:10:57 event -- common/autotest_common.sh@10 -- # set +x
00:06:05.392 ************************************
00:06:05.392 START TEST app_repeat
00:06:05.392 ************************************
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2371649
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2371649'
Process app_repeat pid: 2371649
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:06:05.392 14:10:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2371649 /var/tmp/spdk-nbd.sock
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2371649 ']'
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.392 14:10:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.392 [2024-07-12 14:10:57.336461] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:05.392 [2024-07-12 14:10:57.336511] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371649 ] 00:06:05.392 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.392 [2024-07-12 14:10:57.392659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.673 [2024-07-12 14:10:57.469655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.673 [2024-07-12 14:10:57.469658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.673 14:10:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.673 14:10:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:05.673 14:10:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.932 Malloc0 00:06:05.932 14:10:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.932 Malloc1 00:06:05.932 14:10:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.932 14:10:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.191 /dev/nbd0 00:06:06.191 14:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.191 14:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.191 1+0 records in 00:06:06.191 1+0 records out 00:06:06.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223945 s, 18.3 MB/s 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.191 14:10:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:06.191 14:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.191 14:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.191 14:10:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.450 /dev/nbd1 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.450 1+0 records in 00:06:06.450 1+0 records out 00:06:06.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200002 s, 20.5 MB/s 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.450 14:10:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.450 14:10:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.710 { 00:06:06.710 "nbd_device": "/dev/nbd0", 00:06:06.710 
"bdev_name": "Malloc0" 00:06:06.710 }, 00:06:06.710 { 00:06:06.710 "nbd_device": "/dev/nbd1", 00:06:06.710 "bdev_name": "Malloc1" 00:06:06.710 } 00:06:06.710 ]' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.710 { 00:06:06.710 "nbd_device": "/dev/nbd0", 00:06:06.710 "bdev_name": "Malloc0" 00:06:06.710 }, 00:06:06.710 { 00:06:06.710 "nbd_device": "/dev/nbd1", 00:06:06.710 "bdev_name": "Malloc1" 00:06:06.710 } 00:06:06.710 ]' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.710 /dev/nbd1' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.710 /dev/nbd1' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:06:06.710 256+0 records in 00:06:06.710 256+0 records out 00:06:06.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00332675 s, 315 MB/s 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.710 256+0 records in 00:06:06.710 256+0 records out 00:06:06.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132368 s, 79.2 MB/s 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.710 256+0 records in 00:06:06.710 256+0 records out 00:06:06.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154082 s, 68.1 MB/s 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.710 14:10:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.969 14:10:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.969 
14:10:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.228 14:10:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.228 
14:10:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.228 14:10:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.228 14:10:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.487 14:10:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.746 [2024-07-12 14:10:59.601044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.746 [2024-07-12 14:10:59.669373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.746 [2024-07-12 14:10:59.669382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.746 [2024-07-12 14:10:59.709966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.746 [2024-07-12 14:10:59.710006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.035 spdk_app_start Round 1 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2371649 /var/tmp/spdk-nbd.sock 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2371649 ']' 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:11.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.035 14:11:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.035 Malloc0 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.035 Malloc1 00:06:11.035 14:11:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.035 14:11:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.294 /dev/nbd0 00:06:11.294 14:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.294 14:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.294 1+0 records in 00:06:11.294 1+0 records out 00:06:11.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.4725e-05 s, 43.2 MB/s 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.294 14:11:03 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.294 14:11:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.294 14:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.294 14:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.294 14:11:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.554 /dev/nbd1 00:06:11.554 14:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.554 14:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.554 1+0 records in 00:06:11.554 1+0 records out 00:06:11.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000107092 s, 
38.2 MB/s 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.554 14:11:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.555 14:11:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.555 14:11:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.555 14:11:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.555 { 00:06:11.555 "nbd_device": "/dev/nbd0", 00:06:11.555 "bdev_name": "Malloc0" 00:06:11.555 }, 00:06:11.555 { 00:06:11.555 "nbd_device": "/dev/nbd1", 00:06:11.555 "bdev_name": "Malloc1" 00:06:11.555 } 00:06:11.555 ]' 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.555 { 00:06:11.555 "nbd_device": "/dev/nbd0", 00:06:11.555 "bdev_name": "Malloc0" 00:06:11.555 }, 00:06:11.555 { 00:06:11.555 "nbd_device": "/dev/nbd1", 00:06:11.555 "bdev_name": "Malloc1" 00:06:11.555 } 00:06:11.555 ]' 00:06:11.555 14:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.873 /dev/nbd1' 00:06:11.873 14:11:03 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.873 /dev/nbd1' 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.873 256+0 records in 00:06:11.873 256+0 records out 00:06:11.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105934 s, 99.0 MB/s 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.873 256+0 records in 00:06:11.873 256+0 records out 00:06:11.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142728 s, 73.5 MB/s 00:06:11.873 14:11:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.874 14:11:03 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.874 256+0 records in 00:06:11.874 256+0 records out 00:06:11.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146738 s, 71.5 MB/s 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.874 14:11:03 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.874 14:11:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.133 14:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:12.392 14:11:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:12.392 14:11:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:12.651 14:11:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.651 [2024-07-12 14:11:04.622269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.911 [2024-07-12 14:11:04.688497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.911 [2024-07-12 14:11:04.688499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.911 [2024-07-12 14:11:04.729098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.911 [2024-07-12 14:11:04.729137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:15.446 14:11:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:15.446 14:11:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:15.446 spdk_app_start Round 2
00:06:15.446 14:11:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2371649 /var/tmp/spdk-nbd.sock
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2371649 ']'
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:15.446 14:11:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:15.705 14:11:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:15.705 14:11:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:15.705 14:11:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.963 Malloc0
00:06:15.963 14:11:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:16.222 Malloc1
00:06:16.222 14:11:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.222 14:11:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
00:06:16.222 14:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:16.222 14:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.222 1+0 records in
00:06:16.222 1+0 records out
00:06:16.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176426 s, 23.2 MB/s
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:16.222 14:11:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:16.222 14:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.222 14:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.222 14:11:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:16.480 /dev/nbd1
00:06:16.480 14:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.481 1+0 records in
00:06:16.481 1+0 records out
00:06:16.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231502 s, 17.7 MB/s
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:16.481 14:11:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.481 14:11:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:16.739 {
00:06:16.739 "nbd_device": "/dev/nbd0",
00:06:16.739 "bdev_name": "Malloc0"
00:06:16.739 },
00:06:16.739 {
00:06:16.739 "nbd_device": "/dev/nbd1",
00:06:16.739 "bdev_name": "Malloc1"
00:06:16.739 }
00:06:16.739 ]'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:16.739 {
00:06:16.739 "nbd_device": "/dev/nbd0",
00:06:16.739 "bdev_name": "Malloc0"
00:06:16.739 },
00:06:16.739 {
00:06:16.739 "nbd_device": "/dev/nbd1",
00:06:16.739 "bdev_name": "Malloc1"
00:06:16.739 }
00:06:16.739 ]'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:16.739 /dev/nbd1'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:16.739 /dev/nbd1'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:16.739 14:11:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103101 s, 102 MB/s
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133294 s, 78.7 MB/s
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:16.740 256+0 records in
00:06:16.740 256+0 records out
00:06:16.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152911 s, 68.6 MB/s
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:16.740 14:11:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:16.998 14:11:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:17.256 14:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:17.257 14:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:17.516 14:11:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:17.516 14:11:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:17.516 14:11:09 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:17.775 [2024-07-12 14:11:09.643822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:17.775 [2024-07-12 14:11:09.710311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.775 [2024-07-12 14:11:09.710313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.775 [2024-07-12 14:11:09.751131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:17.775 [2024-07-12 14:11:09.751168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:21.064 14:11:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2371649 /var/tmp/spdk-nbd.sock
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2371649 ']'
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:21.064 14:11:12 event.app_repeat -- event/event.sh@39 -- # killprocess 2371649
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2371649 ']'
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2371649
00:06:21.064 14:11:12 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2371649
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2371649'
killing process with pid 2371649
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2371649
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2371649
00:06:21.065 spdk_app_start is called in Round 0.
00:06:21.065 Shutdown signal received, stop current app iteration
00:06:21.065 Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 reinitialization...
00:06:21.065 spdk_app_start is called in Round 1.
00:06:21.065 Shutdown signal received, stop current app iteration
00:06:21.065 Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 reinitialization...
00:06:21.065 spdk_app_start is called in Round 2.
00:06:21.065 Shutdown signal received, stop current app iteration
00:06:21.065 Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 reinitialization...
00:06:21.065 spdk_app_start is called in Round 3.
00:06:21.065 Shutdown signal received, stop current app iteration
00:06:21.065 14:11:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:21.065 14:11:12 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:21.065 00:06:21.065 real 0m15.540s
00:06:21.065 user 0m33.600s
00:06:21.065 sys 0m2.291s
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:21.065 14:11:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.065 ************************************
00:06:21.065 END TEST app_repeat
00:06:21.065 ************************************
00:06:21.065 14:11:12 event -- common/autotest_common.sh@1142 -- # return 0
00:06:21.065 14:11:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:21.065 14:11:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:21.065 14:11:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:21.065 14:11:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:21.065 14:11:12 event -- common/autotest_common.sh@10 -- # set +x
00:06:21.065 ************************************
00:06:21.065 START TEST cpu_locks
00:06:21.065 ************************************
00:06:21.065 14:11:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:21.065 * Looking for test storage...
00:06:21.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:21.065 14:11:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:21.065 14:11:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:21.065 14:11:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:21.065 14:11:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:21.065 14:11:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:21.065 14:11:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:21.065 14:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:21.065 ************************************
00:06:21.065 START TEST default_locks
00:06:21.065 ************************************
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2374435
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2374435
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2374435 ']'
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:21.065 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:21.065 [2024-07-12 14:11:13.073086] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:06:21.065 [2024-07-12 14:11:13.073135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374435 ]
00:06:21.323 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.323 [2024-07-12 14:11:13.126553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.323 [2024-07-12 14:11:13.206295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.890 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:21.890 14:11:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:06:21.890 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2374435
00:06:21.890 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2374435
00:06:21.890 14:11:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:22.459 lslocks: write error
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2374435
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2374435 ']'
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2374435
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374435
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374435'
killing process with pid 2374435
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2374435
00:06:22.459 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2374435
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2374435
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2374435
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2374435
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2374435 ']'
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:22.718 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:22.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2374435) - No such process
00:06:22.719 ERROR: process (pid: 2374435) is no longer running
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:22.719 00:06:22.719 real 0m1.646s
00:06:22.719 user 0m1.731s
00:06:22.719 sys 0m0.537s
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.719 14:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:22.719 ************************************
00:06:22.719 END TEST default_locks
00:06:22.719 ************************************
00:06:22.719 14:11:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:22.719 14:11:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:22.719 14:11:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:22.719 14:11:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.719 14:11:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:22.978 ************************************
00:06:22.978 START TEST default_locks_via_rpc
00:06:22.978 ************************************
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2374897
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2374897
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2374897 ']'
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.978 14:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.978 [2024-07-12 14:11:14.780617] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:22.978 [2024-07-12 14:11:14.780657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374897 ] 00:06:22.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.978 [2024-07-12 14:11:14.832832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.978 [2024-07-12 14:11:14.912278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2374897 ']' 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374897' 00:06:23.916 killing process with pid 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 2374897 00:06:23.916 14:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2374897 00:06:24.485 00:06:24.485 real 0m1.484s 00:06:24.485 user 0m1.575s 00:06:24.485 sys 0m0.461s 00:06:24.485 14:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.485 14:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.485 ************************************ 00:06:24.485 END TEST default_locks_via_rpc 00:06:24.485 ************************************ 00:06:24.485 14:11:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.485 14:11:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.485 14:11:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.485 14:11:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.485 14:11:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.485 ************************************ 00:06:24.485 START TEST non_locking_app_on_locked_coremask 00:06:24.485 ************************************ 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2375159 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2375159 /var/tmp/spdk.sock 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2375159 ']' 
00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.485 14:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.485 [2024-07-12 14:11:16.328119] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:24.486 [2024-07-12 14:11:16.328160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375159 ] 00:06:24.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.486 [2024-07-12 14:11:16.380219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.486 [2024-07-12 14:11:16.459672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2375189 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # 
waitforlisten 2375189 /var/tmp/spdk2.sock 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2375189 ']' 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.424 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 [2024-07-12 14:11:17.176308] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:25.424 [2024-07-12 14:11:17.176356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375189 ] 00:06:25.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.424 [2024-07-12 14:11:17.252903] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.424 [2024-07-12 14:11:17.252929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.424 [2024-07-12 14:11:17.405186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.992 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.992 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.992 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2375159 00:06:25.992 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2375159 00:06:25.992 14:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.930 lslocks: write error 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2375159 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2375159 ']' 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2375159 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375159 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2375159' 00:06:26.930 killing process with pid 2375159 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2375159 00:06:26.930 14:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2375159 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2375189 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2375189 ']' 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2375189 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375189 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375189' 00:06:27.498 killing process with pid 2375189 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2375189 00:06:27.498 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2375189 00:06:27.757 00:06:27.757 real 0m3.291s 00:06:27.757 user 0m3.549s 00:06:27.757 sys 0m0.904s 00:06:27.757 14:11:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.757 14:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.757 ************************************ 00:06:27.757 END TEST non_locking_app_on_locked_coremask 00:06:27.757 ************************************ 00:06:27.757 14:11:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.757 14:11:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:27.757 14:11:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.757 14:11:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.757 14:11:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.757 ************************************ 00:06:27.757 START TEST locking_app_on_unlocked_coremask 00:06:27.757 ************************************ 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2375666 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2375666 /var/tmp/spdk.sock 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2375666 ']' 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.757 14:11:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.757 [2024-07-12 14:11:19.685202] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:27.757 [2024-07-12 14:11:19.685242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375666 ] 00:06:27.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.757 [2024-07-12 14:11:19.737702] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.757 [2024-07-12 14:11:19.737726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.017 [2024-07-12 14:11:19.816782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2375897 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2375897 /var/tmp/spdk2.sock 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2375897 ']' 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.585 14:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.585 [2024-07-12 14:11:20.546648] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:28.585 [2024-07-12 14:11:20.546697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375897 ] 00:06:28.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.845 [2024-07-12 14:11:20.622886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.845 [2024-07-12 14:11:20.775057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.413 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.413 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.413 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2375897 00:06:29.413 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2375897 00:06:29.413 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.980 lslocks: write error 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2375666 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2375666 ']' 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2375666 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375666 00:06:29.980 14:11:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375666' 00:06:29.980 killing process with pid 2375666 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2375666 00:06:29.980 14:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2375666 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2375897 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2375897 ']' 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2375897 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375897 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375897' 00:06:30.548 killing process with pid 2375897 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 2375897 00:06:30.548 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2375897 00:06:30.807 00:06:30.807 real 0m3.135s 00:06:30.807 user 0m3.392s 00:06:30.807 sys 0m0.859s 00:06:30.807 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.807 14:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.807 ************************************ 00:06:30.807 END TEST locking_app_on_unlocked_coremask 00:06:30.807 ************************************ 00:06:30.807 14:11:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:30.807 14:11:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:30.807 14:11:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.807 14:11:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.807 14:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.066 ************************************ 00:06:31.066 START TEST locking_app_on_locked_coremask 00:06:31.066 ************************************ 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2376289 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2376289 /var/tmp/spdk.sock 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
2376289 ']' 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.066 14:11:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.066 [2024-07-12 14:11:22.878864] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:31.066 [2024-07-12 14:11:22.878902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376289 ] 00:06:31.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.066 [2024-07-12 14:11:22.933103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.066 [2024-07-12 14:11:23.010123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.002 14:11:23 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2376401 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2376401 /var/tmp/spdk2.sock 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2376401 /var/tmp/spdk2.sock 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2376401 /var/tmp/spdk2.sock 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2376401 ']' 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.002 14:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.002 [2024-07-12 14:11:23.719214] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:32.002 [2024-07-12 14:11:23.719261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376401 ] 00:06:32.002 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.002 [2024-07-12 14:11:23.795309] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2376289 has claimed it. 00:06:32.002 [2024-07-12 14:11:23.795350] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2376401) - No such process 00:06:32.570 ERROR: process (pid: 2376401) is no longer running 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 2376289 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2376289 00:06:32.570 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.136 lslocks: write error 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2376289 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2376289 ']' 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2376289 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2376289 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2376289' 00:06:33.136 killing process with pid 2376289 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2376289 00:06:33.136 14:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2376289 00:06:33.394 00:06:33.394 real 0m2.391s 00:06:33.394 user 0m2.622s 00:06:33.394 sys 0m0.634s 00:06:33.394 14:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.394 
14:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.394 ************************************ 00:06:33.394 END TEST locking_app_on_locked_coremask 00:06:33.394 ************************************ 00:06:33.394 14:11:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.394 14:11:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.394 14:11:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.394 14:11:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.394 14:11:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.394 ************************************ 00:06:33.394 START TEST locking_overlapped_coremask 00:06:33.394 ************************************ 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2376663 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2376663 /var/tmp/spdk.sock 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2376663 ']' 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.394 14:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.394 [2024-07-12 14:11:25.340456] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:33.394 [2024-07-12 14:11:25.340500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376663 ] 00:06:33.394 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.394 [2024-07-12 14:11:25.396020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.651 [2024-07-12 14:11:25.474094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.651 [2024-07-12 14:11:25.474191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.651 [2024-07-12 14:11:25.474192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2376897 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2376897 /var/tmp/spdk2.sock 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg 
waitforlisten 2376897 /var/tmp/spdk2.sock 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2376897 /var/tmp/spdk2.sock 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2376897 ']' 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.216 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.216 [2024-07-12 14:11:26.198387] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:34.216 [2024-07-12 14:11:26.198445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376897 ] 00:06:34.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.474 [2024-07-12 14:11:26.274325] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2376663 has claimed it. 00:06:34.474 [2024-07-12 14:11:26.274365] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2376897) - No such process 00:06:35.041 ERROR: process (pid: 2376897) is no longer running 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.041 14:11:26 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2376663 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2376663 ']' 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2376663 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2376663 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2376663' 00:06:35.041 killing process with pid 2376663 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 2376663 00:06:35.041 14:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2376663 00:06:35.300 00:06:35.300 real 0m1.889s 00:06:35.300 user 0m5.333s 00:06:35.300 sys 0m0.399s 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.300 14:11:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.300 ************************************ 00:06:35.300 END TEST locking_overlapped_coremask 00:06:35.300 ************************************ 00:06:35.300 14:11:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.300 14:11:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:35.300 14:11:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.300 14:11:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.300 14:11:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.300 ************************************ 00:06:35.300 START TEST locking_overlapped_coremask_via_rpc 00:06:35.300 ************************************ 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2377153 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2377153 /var/tmp/spdk.sock 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2377153 ']' 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.300 14:11:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.300 14:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.300 [2024-07-12 14:11:27.296304] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:35.300 [2024-07-12 14:11:27.296346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377153 ] 00:06:35.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.559 [2024-07-12 14:11:27.350034] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.559 [2024-07-12 14:11:27.350059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.559 [2024-07-12 14:11:27.419290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.559 [2024-07-12 14:11:27.419391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.559 [2024-07-12 14:11:27.419396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2377167 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2377167 /var/tmp/spdk2.sock 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2377167 ']' 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.126 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.126 [2024-07-12 14:11:28.134900] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:36.126 [2024-07-12 14:11:28.134956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377167 ] 00:06:36.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.385 [2024-07-12 14:11:28.213698] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.385 [2024-07-12 14:11:28.213727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.385 [2024-07-12 14:11:28.366417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.385 [2024-07-12 14:11:28.366536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.385 [2024-07-12 14:11:28.366536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.953 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.953 [2024-07-12 14:11:28.961451] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2377153 has claimed it. 
00:06:37.211 request: 00:06:37.211 { 00:06:37.211 "method": "framework_enable_cpumask_locks", 00:06:37.211 "req_id": 1 00:06:37.211 } 00:06:37.211 Got JSON-RPC error response 00:06:37.211 response: 00:06:37.211 { 00:06:37.211 "code": -32603, 00:06:37.211 "message": "Failed to claim CPU core: 2" 00:06:37.211 } 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2377153 /var/tmp/spdk.sock 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2377153 ']' 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.211 14:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2377167 /var/tmp/spdk2.sock 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2377167 ']' 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.211 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.469 00:06:37.469 real 0m2.091s 00:06:37.469 user 0m0.843s 00:06:37.469 sys 0m0.174s 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.469 14:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.469 ************************************ 00:06:37.469 END TEST locking_overlapped_coremask_via_rpc 00:06:37.469 ************************************ 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:37.469 14:11:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:37.469 14:11:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
2377153 ]] 00:06:37.469 14:11:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2377153 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2377153 ']' 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2377153 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2377153 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2377153' 00:06:37.469 killing process with pid 2377153 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2377153 00:06:37.469 14:11:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2377153 00:06:37.726 14:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2377167 ]] 00:06:37.726 14:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2377167 00:06:37.726 14:11:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2377167 ']' 00:06:37.726 14:11:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2377167 00:06:37.726 14:11:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:37.983 14:11:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.983 14:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2377167 00:06:37.983 14:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:37.983 14:11:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:37.984 14:11:29 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2377167' 00:06:37.984 killing process with pid 2377167 00:06:37.984 14:11:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2377167 00:06:37.984 14:11:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2377167 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2377153 ]] 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2377153 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2377153 ']' 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2377153 00:06:38.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2377153) - No such process 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2377153 is not found' 00:06:38.242 Process with pid 2377153 is not found 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2377167 ]] 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2377167 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2377167 ']' 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2377167 00:06:38.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2377167) - No such process 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2377167 is not found' 00:06:38.242 Process with pid 2377167 is not found 00:06:38.242 14:11:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.242 00:06:38.242 real 0m17.194s 00:06:38.242 user 0m29.499s 00:06:38.242 sys 0m4.869s 00:06:38.242 14:11:30 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.242 14:11:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.242 ************************************ 00:06:38.242 END TEST cpu_locks 00:06:38.242 ************************************ 00:06:38.242 14:11:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:38.242 00:06:38.242 real 0m41.188s 00:06:38.242 user 1m17.974s 00:06:38.242 sys 0m8.046s 00:06:38.242 14:11:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.242 14:11:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.242 ************************************ 00:06:38.242 END TEST event 00:06:38.242 ************************************ 00:06:38.242 14:11:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.242 14:11:30 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.242 14:11:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.242 14:11:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.242 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.242 ************************************ 00:06:38.242 START TEST thread 00:06:38.242 ************************************ 00:06:38.242 14:11:30 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.501 * Looking for test storage... 
00:06:38.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:38.501 14:11:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.501 14:11:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:38.501 14:11:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.501 14:11:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.501 ************************************ 00:06:38.501 START TEST thread_poller_perf 00:06:38.501 ************************************ 00:06:38.501 14:11:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.501 [2024-07-12 14:11:30.330915] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:38.501 [2024-07-12 14:11:30.330981] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377720 ] 00:06:38.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.501 [2024-07-12 14:11:30.389376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.501 [2024-07-12 14:11:30.462079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.501 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:39.876 ====================================== 00:06:39.876 busy:2307406952 (cyc) 00:06:39.876 total_run_count: 410000 00:06:39.876 tsc_hz: 2300000000 (cyc) 00:06:39.876 ====================================== 00:06:39.876 poller_cost: 5627 (cyc), 2446 (nsec) 00:06:39.876 00:06:39.876 real 0m1.228s 00:06:39.876 user 0m1.150s 00:06:39.876 sys 0m0.073s 00:06:39.876 14:11:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.876 14:11:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 ************************************ 00:06:39.876 END TEST thread_poller_perf 00:06:39.876 ************************************ 00:06:39.876 14:11:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:39.876 14:11:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.876 14:11:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:39.877 14:11:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.877 14:11:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.877 ************************************ 00:06:39.877 START TEST thread_poller_perf 00:06:39.877 ************************************ 00:06:39.877 14:11:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.877 [2024-07-12 14:11:31.625053] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:39.877 [2024-07-12 14:11:31.625120] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377971 ] 00:06:39.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.877 [2024-07-12 14:11:31.682608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.877 [2024-07-12 14:11:31.758634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.877 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.254 ====================================== 00:06:41.254 busy:2301623384 (cyc) 00:06:41.254 total_run_count: 5438000 00:06:41.254 tsc_hz: 2300000000 (cyc) 00:06:41.254 ====================================== 00:06:41.254 poller_cost: 423 (cyc), 183 (nsec) 00:06:41.254 00:06:41.254 real 0m1.226s 00:06:41.254 user 0m1.149s 00:06:41.254 sys 0m0.073s 00:06:41.254 14:11:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.254 14:11:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.254 ************************************ 00:06:41.254 END TEST thread_poller_perf 00:06:41.254 ************************************ 00:06:41.254 14:11:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:41.254 14:11:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:41.254 00:06:41.254 real 0m2.667s 00:06:41.254 user 0m2.386s 00:06:41.254 sys 0m0.291s 00:06:41.254 14:11:32 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.255 14:11:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.255 ************************************ 00:06:41.255 END TEST thread 00:06:41.255 ************************************ 00:06:41.255 14:11:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.255 14:11:32 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:41.255 14:11:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.255 14:11:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.255 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.255 ************************************ 00:06:41.255 START TEST accel 00:06:41.255 ************************************ 00:06:41.255 14:11:32 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:41.255 * Looking for test storage... 00:06:41.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:41.255 14:11:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:41.255 14:11:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:41.255 14:11:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:41.255 14:11:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2378258 00:06:41.255 14:11:33 accel -- accel/accel.sh@63 -- # waitforlisten 2378258 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@829 -- # '[' -z 2378258 ']' 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.255 14:11:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.255 14:11:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.255 14:11:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.255 14:11:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.255 14:11:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.255 14:11:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.255 14:11:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.255 14:11:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.255 14:11:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:41.255 14:11:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:41.255 [2024-07-12 14:11:33.070301] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:41.255 [2024-07-12 14:11:33.070348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378258 ] 00:06:41.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.255 [2024-07-12 14:11:33.124235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.255 [2024-07-12 14:11:33.204434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.189 14:11:33 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.189 14:11:33 accel -- common/autotest_common.sh@862 -- # return 0 00:06:42.189 14:11:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:42.189 14:11:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:42.189 14:11:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:42.189 14:11:33 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:42.189 14:11:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:42.189 14:11:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:42.189 14:11:33 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # IFS== 00:06:42.190 14:11:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:42.190 14:11:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:42.190 14:11:33 accel -- accel/accel.sh@75 -- # killprocess 2378258 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@948 -- # '[' -z 2378258 ']' 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@952 -- # kill -0 2378258 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@953 -- # uname 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2378258 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2378258' 00:06:42.190 killing process with pid 2378258 00:06:42.190 14:11:33 accel -- common/autotest_common.sh@967 -- # kill 2378258 00:06:42.190 
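The get_expected_opcs trace above splits each "opcode=module" line from accel_get_opc_assignments on '=' via IFS== read and records the software module for every opcode. A rough standalone sketch of that parsing (the sample input lines are illustrative, not the RPC's actual output):

```shell
# parse "opcode=module" pairs the way the IFS== read loop above does
declare -A expected_opcs
while IFS== read -r opc module; do
    # the trace assigns "software" for each opcode in this configuration
    expected_opcs["$opc"]=$module
done <<'EOF'
copy=software
fill=software
crc32c=software
EOF
echo "${expected_opcs[crc32c]}"
```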
14:11:33 accel -- common/autotest_common.sh@972 -- # wait 2378258 00:06:42.448 14:11:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:42.448 14:11:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.448 14:11:34 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:42.448 14:11:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:42.448 14:11:34 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.448 14:11:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.448 14:11:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.448 14:11:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.448 ************************************ 00:06:42.448 START TEST accel_missing_filename 00:06:42.448 ************************************ 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.448 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:42.448 14:11:34 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:42.448 14:11:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:42.448 [2024-07-12 14:11:34.413481] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:42.448 [2024-07-12 14:11:34.413530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378532 ] 00:06:42.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.708 [2024-07-12 14:11:34.468168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.708 [2024-07-12 14:11:34.539763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.708 [2024-07-12 14:11:34.580609] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.708 [2024-07-12 14:11:34.640184] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:42.708 A filename is required. 
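The es=234 → es=106 → es=1 sequence that follows (and es=161 → es=33 → es=1 in the compress_verify test below) suggests how the NOT helper normalizes exit statuses: values above 128 (death by signal) have 128 subtracted, and any remaining non-zero status is collapsed to 1. A rough reconstruction under that assumption (normalize_es is a hypothetical name, not the harness's function):

```shell
# assumed reconstruction of the exit-status handling traced in the log
normalize_es() {
    local es=$1
    (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset
    (( es != 0 )) && es=1                  # collapse any failure to 1
    echo "$es"
}
normalize_es 234
```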
00:06:42.708 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:42.708 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.708 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:42.708 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:42.709 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:42.709 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.709 00:06:42.709 real 0m0.323s 00:06:42.709 user 0m0.242s 00:06:42.709 sys 0m0.119s 00:06:42.709 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.709 14:11:34 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:42.709 ************************************ 00:06:42.709 END TEST accel_missing_filename 00:06:42.709 ************************************ 00:06:42.967 14:11:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.967 14:11:34 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.967 14:11:34 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:42.967 14:11:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.967 14:11:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.967 ************************************ 00:06:42.967 START TEST accel_compress_verify 00:06:42.967 ************************************ 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:42.967 14:11:34 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.967 14:11:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:42.967 14:11:34 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:42.967 [2024-07-12 14:11:34.788567] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:42.967 [2024-07-12 14:11:34.788618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378553 ] 00:06:42.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.967 [2024-07-12 14:11:34.842647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.968 [2024-07-12 14:11:34.913826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.968 [2024-07-12 14:11:34.954966] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.227 [2024-07-12 14:11:35.014679] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:43.227 00:06:43.227 Compression does not support the verify option, aborting. 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.227 00:06:43.227 real 0m0.318s 00:06:43.227 user 0m0.245s 00:06:43.227 sys 0m0.112s 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.227 14:11:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:43.227 ************************************ 00:06:43.227 END TEST accel_compress_verify 00:06:43.227 ************************************ 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.227 14:11:35 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.227 ************************************ 00:06:43.227 START TEST accel_wrong_workload 00:06:43.227 ************************************ 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:43.227 14:11:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:43.227 Unsupported workload type: foobar 00:06:43.227 [2024-07-12 14:11:35.170016] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:43.227 accel_perf options: 00:06:43.227 [-h help message] 00:06:43.227 [-q queue depth per core] 00:06:43.227 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.227 [-T number of threads per core 00:06:43.227 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:43.227 [-t time in seconds] 00:06:43.227 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.227 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:43.227 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.227 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.227 [-S for crc32c workload, use this seed value (default 0) 00:06:43.227 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.227 [-f for fill workload, use this BYTE value (default 255) 00:06:43.227 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.227 [-y verify result if this switch is on] 00:06:43.227 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.227 Can be used to spread operations across a wider range of memory. 
00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.227 00:06:43.227 real 0m0.030s 00:06:43.227 user 0m0.020s 00:06:43.227 sys 0m0.010s 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.227 14:11:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:43.227 ************************************ 00:06:43.227 END TEST accel_wrong_workload 00:06:43.227 ************************************ 00:06:43.227 Error: writing output failed: Broken pipe 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.227 14:11:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.227 14:11:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.486 ************************************ 00:06:43.486 START TEST accel_negative_buffers 00:06:43.486 ************************************ 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:43.486 14:11:35 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:43.486 14:11:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:43.486 -x option must be non-negative. 00:06:43.486 [2024-07-12 14:11:35.257374] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:43.486 accel_perf options: 00:06:43.486 [-h help message] 00:06:43.486 [-q queue depth per core] 00:06:43.486 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.486 [-T number of threads per core 00:06:43.486 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:43.486 [-t time in seconds] 00:06:43.486 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.486 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:43.486 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.486 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.486 [-S for crc32c workload, use this seed value (default 0) 00:06:43.486 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.486 [-f for fill workload, use this BYTE value (default 255) 00:06:43.486 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.486 [-y verify result if this switch is on] 00:06:43.486 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.486 Can be used to spread operations across a wider range of memory. 
00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.486 00:06:43.486 real 0m0.026s 00:06:43.486 user 0m0.019s 00:06:43.486 sys 0m0.007s 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.486 14:11:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:43.486 ************************************ 00:06:43.486 END TEST accel_negative_buffers 00:06:43.486 ************************************ 00:06:43.486 Error: writing output failed: Broken pipe 00:06:43.486 14:11:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.486 14:11:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:43.486 14:11:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:43.486 14:11:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.486 14:11:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.486 ************************************ 00:06:43.486 START TEST accel_crc32c 00:06:43.486 ************************************ 00:06:43.486 14:11:35 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:43.486 14:11:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:43.486 [2024-07-12 14:11:35.343307] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:43.486 [2024-07-12 14:11:35.343375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378835 ] 00:06:43.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.486 [2024-07-12 14:11:35.398037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.486 [2024-07-12 14:11:35.468677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 
14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.744 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.745 14:11:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:44.680 14:11:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.680 00:06:44.680 real 0m1.329s 00:06:44.680 user 0m1.224s 00:06:44.680 sys 0m0.109s 00:06:44.680 14:11:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.680 14:11:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 ************************************ 00:06:44.680 END TEST accel_crc32c 00:06:44.680 ************************************ 00:06:44.680 14:11:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.680 14:11:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.680 14:11:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:44.680 14:11:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.680 14:11:36 accel -- common/autotest_common.sh@10 -- # set +x 
00:06:44.940 ************************************ 00:06:44.940 START TEST accel_crc32c_C2 00:06:44.940 ************************************ 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.940 [2024-07-12 14:11:36.728712] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:44.940 [2024-07-12 14:11:36.728759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379083 ] 00:06:44.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.940 [2024-07-12 14:11:36.782881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.940 [2024-07-12 14:11:36.854281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.940 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.941 14:11:36 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.941 14:11:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.319 
14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.319 00:06:46.319 real 0m1.327s 00:06:46.319 user 0m1.220s 00:06:46.319 sys 0m0.112s 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.319 14:11:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:46.319 ************************************ 00:06:46.319 END TEST accel_crc32c_C2 00:06:46.319 ************************************ 00:06:46.319 14:11:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.319 14:11:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:46.319 14:11:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.319 14:11:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.319 14:11:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.319 ************************************ 00:06:46.319 START TEST accel_copy 00:06:46.319 ************************************ 00:06:46.319 14:11:38 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:46.319 [2024-07-12 14:11:38.112089] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:46.319 [2024-07-12 14:11:38.112156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379329 ] 00:06:46.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.319 [2024-07-12 14:11:38.167814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.319 [2024-07-12 14:11:38.239245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.319 14:11:38 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.320 14:11:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:47.697 14:11:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.697 00:06:47.697 real 0m1.329s 00:06:47.697 user 0m1.218s 00:06:47.697 sys 0m0.116s 00:06:47.697 14:11:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.697 14:11:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.697 ************************************ 00:06:47.697 END TEST accel_copy 00:06:47.697 ************************************ 00:06:47.697 14:11:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.697 14:11:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.697 14:11:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:47.697 14:11:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.697 14:11:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.697 ************************************ 00:06:47.697 START TEST accel_fill 00:06:47.697 ************************************ 00:06:47.697 14:11:39 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:47.697 [2024-07-12 14:11:39.496678] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:47.697 [2024-07-12 14:11:39.496744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379582 ] 00:06:47.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.697 [2024-07-12 14:11:39.552402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.697 [2024-07-12 14:11:39.629092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.697 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.698 14:11:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.075 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:49.076 14:11:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:49.076 00:06:49.076 real 0m1.334s 00:06:49.076 user 0m1.222s 00:06:49.076 sys 0m0.113s 00:06:49.076 14:11:40 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.076 14:11:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:49.076 ************************************ 00:06:49.076 END TEST accel_fill 00:06:49.076 ************************************ 00:06:49.076 14:11:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.076 14:11:40 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:49.076 14:11:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.076 14:11:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.076 14:11:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.076 ************************************ 00:06:49.076 START TEST accel_copy_crc32c 00:06:49.076 ************************************ 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:49.076 14:11:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:49.076 [2024-07-12 14:11:40.883418] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:49.076 [2024-07-12 14:11:40.883465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379830 ] 00:06:49.076 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.076 [2024-07-12 14:11:40.937249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.076 [2024-07-12 14:11:41.008891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.076 14:11:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.490 00:06:50.490 real 0m1.326s 00:06:50.490 user 0m1.209s 00:06:50.490 sys 0m0.118s 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.490 14:11:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:50.490 ************************************ 00:06:50.490 END TEST accel_copy_crc32c 
00:06:50.490 ************************************ 00:06:50.490 14:11:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.490 14:11:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.490 14:11:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.490 14:11:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.490 14:11:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.490 ************************************ 00:06:50.490 START TEST accel_copy_crc32c_C2 00:06:50.490 ************************************ 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.490 14:11:42 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:50.490 [2024-07-12 14:11:42.267760] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:50.490 [2024-07-12 14:11:42.267809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380079 ] 00:06:50.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.490 [2024-07-12 14:11:42.321922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.490 [2024-07-12 14:11:42.392919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.490 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.491 
14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.491 14:11:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.869 00:06:51.869 real 0m1.326s 00:06:51.869 user 0m1.218s 00:06:51.869 sys 0m0.111s 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.869 14:11:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.869 ************************************ 00:06:51.869 
END TEST accel_copy_crc32c_C2 00:06:51.869 ************************************ 00:06:51.869 14:11:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.869 14:11:43 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:51.869 14:11:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.869 14:11:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.869 14:11:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.869 ************************************ 00:06:51.869 START TEST accel_dualcast 00:06:51.869 ************************************ 00:06:51.869 14:11:43 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:51.869 14:11:43 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:06:51.869 [2024-07-12 14:11:43.646066] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:51.869 [2024-07-12 14:11:43.646116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380330 ] 00:06:51.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.869 [2024-07-12 14:11:43.699248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.869 [2024-07-12 14:11:43.770293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.869 14:11:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.870 14:11:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.870 14:11:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.870 14:11:43 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:44 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:53.249 14:11:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.249 00:06:53.249 real 0m1.319s 00:06:53.249 user 0m1.213s 00:06:53.249 sys 0m0.108s 00:06:53.249 14:11:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.249 14:11:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:53.249 ************************************ 00:06:53.249 END TEST accel_dualcast 00:06:53.249 ************************************ 00:06:53.249 14:11:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.249 14:11:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:53.249 14:11:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:53.249 14:11:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.249 14:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.249 ************************************ 00:06:53.249 START TEST accel_compare 00:06:53.249 ************************************ 00:06:53.249 14:11:45 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:53.249 14:11:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:53.250 [2024-07-12 14:11:45.026477] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:53.250 [2024-07-12 14:11:45.026525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380579 ] 00:06:53.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.250 [2024-07-12 14:11:45.080781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.250 [2024-07-12 14:11:45.152464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:53.250 
14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:53.250 
14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 
14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.250 14:11:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.627 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.627 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.627 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:54.628 14:11:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.628 00:06:54.628 real 0m1.326s 00:06:54.628 user 0m1.214s 00:06:54.628 sys 0m0.113s 00:06:54.628 14:11:46 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.628 14:11:46 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:54.628 ************************************ 00:06:54.628 END TEST accel_compare 00:06:54.628 ************************************ 00:06:54.628 14:11:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.628 14:11:46 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:54.628 14:11:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.628 14:11:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.628 14:11:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.628 ************************************ 00:06:54.628 START TEST accel_xor 00:06:54.628 ************************************ 00:06:54.628 14:11:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:54.628 [2024-07-12 14:11:46.410540] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:06:54.628 [2024-07-12 14:11:46.410587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380825 ] 00:06:54.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.628 [2024-07-12 14:11:46.464973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.628 [2024-07-12 14:11:46.536397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 
14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.628 14:11:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.010 00:06:56.010 real 0m1.327s 00:06:56.010 user 0m1.220s 00:06:56.010 sys 
0m0.108s 00:06:56.010 14:11:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.010 14:11:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:56.010 ************************************ 00:06:56.010 END TEST accel_xor 00:06:56.010 ************************************ 00:06:56.010 14:11:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.010 14:11:47 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:56.010 14:11:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.010 14:11:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.010 14:11:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.010 ************************************ 00:06:56.010 START TEST accel_xor 00:06:56.010 ************************************ 00:06:56.010 14:11:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.010 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.011 14:11:47 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:56.011 [2024-07-12 14:11:47.794630] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:56.011 [2024-07-12 14:11:47.794698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381082 ] 00:06:56.011 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.011 [2024-07-12 14:11:47.848855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.011 [2024-07-12 14:11:47.919464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.011 14:11:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.449 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:57.450 14:11:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.450 00:06:57.450 real 0m1.326s 00:06:57.450 user 0m1.223s 00:06:57.450 sys 0m0.106s 00:06:57.450 14:11:49 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.450 14:11:49 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 ************************************ 00:06:57.450 END TEST accel_xor 00:06:57.450 ************************************ 00:06:57.450 14:11:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.450 14:11:49 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:57.450 14:11:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:57.450 14:11:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.450 14:11:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 ************************************ 00:06:57.450 START TEST accel_dif_verify 00:06:57.450 ************************************ 00:06:57.450 14:11:49 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:57.450 [2024-07-12 14:11:49.169920] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:57.450 [2024-07-12 14:11:49.169968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381327 ] 00:06:57.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.450 [2024-07-12 14:11:49.223857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.450 [2024-07-12 14:11:49.301004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:57.450 14:11:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:58.825 14:11:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:06:58.825 00:06:58.825 real 0m1.331s 00:06:58.825 user 0m1.220s 00:06:58.825 sys 0m0.114s 00:06:58.825 14:11:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.825 14:11:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.825 ************************************ 00:06:58.825 END TEST accel_dif_verify 00:06:58.825 ************************************ 00:06:58.825 14:11:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.825 14:11:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:58.825 14:11:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:58.825 14:11:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.825 14:11:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.825 ************************************ 00:06:58.825 START TEST accel_dif_generate 00:06:58.825 ************************************ 00:06:58.825 14:11:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.825 14:11:50 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:58.825 [2024-07-12 14:11:50.560724] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:06:58.825 [2024-07-12 14:11:50.560790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381579 ] 00:06:58.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.825 [2024-07-12 14:11:50.616428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.825 [2024-07-12 14:11:50.687591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.825 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.826 14:11:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:07:00.204 14:11:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.204 00:07:00.204 real 0m1.329s 00:07:00.204 user 0m1.223s 00:07:00.204 sys 0m0.110s 00:07:00.204 14:11:51 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.204 14:11:51 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:00.204 ************************************ 00:07:00.204 END TEST accel_dif_generate 00:07:00.204 ************************************ 00:07:00.204 14:11:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.204 14:11:51 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:00.204 14:11:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:00.204 14:11:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.204 14:11:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.204 ************************************ 00:07:00.204 START TEST accel_dif_generate_copy 00:07:00.204 ************************************ 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.204 14:11:51 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:00.204 14:11:51 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:00.204 [2024-07-12 14:11:51.946893] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:00.204 [2024-07-12 14:11:51.946959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381829 ] 00:07:00.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.204 [2024-07-12 14:11:52.001743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.204 [2024-07-12 14:11:52.072999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.204 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.204 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.204 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.204 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.204 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.205 14:11:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.583 14:11:53 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.583 00:07:01.583 real 0m1.328s 00:07:01.583 user 0m1.218s 00:07:01.583 sys 0m0.112s 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.583 14:11:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.583 ************************************ 00:07:01.583 END TEST accel_dif_generate_copy 00:07:01.583 ************************************ 00:07:01.583 14:11:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.583 14:11:53 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:01.583 14:11:53 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.583 14:11:53 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:01.583 14:11:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.583 14:11:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.583 ************************************ 00:07:01.583 START TEST accel_comp 00:07:01.583 ************************************ 00:07:01.583 14:11:53 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.583 14:11:53 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:01.583 14:11:53 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:01.583 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.584 14:11:53 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:01.584 [2024-07-12 14:11:53.326356] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:01.584 [2024-07-12 14:11:53.326462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382076 ] 00:07:01.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.584 [2024-07-12 14:11:53.380297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.584 [2024-07-12 14:11:53.451707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:01.584 14:11:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.961 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.962 14:11:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:02.962 14:11:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.962 00:07:02.962 real 0m1.323s 00:07:02.962 user 0m1.217s 00:07:02.962 sys 0m0.108s 00:07:02.962 14:11:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.962 14:11:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:02.962 ************************************ 00:07:02.962 END TEST accel_comp 00:07:02.962 ************************************ 00:07:02.962 14:11:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.962 14:11:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.962 14:11:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.962 14:11:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.962 14:11:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.962 ************************************ 00:07:02.962 START TEST accel_decomp 00:07:02.962 ************************************ 00:07:02.962 14:11:54 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:07:02.962 [2024-07-12 14:11:54.714944] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:02.962 [2024-07-12 14:11:54.715011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382329 ] 00:07:02.962 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.962 [2024-07-12 14:11:54.770473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.962 [2024-07-12 14:11:54.841802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 
14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.962 14:11:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.341 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.342 14:11:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.342 00:07:04.342 real 0m1.331s 00:07:04.342 user 0m1.221s 00:07:04.342 sys 0m0.112s 00:07:04.342 14:11:56 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.342 14:11:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:04.342 ************************************ 00:07:04.342 END TEST accel_decomp 00:07:04.342 ************************************ 00:07:04.342 14:11:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.342 14:11:56 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.342 14:11:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:04.342 14:11:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.342 14:11:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.342 ************************************ 00:07:04.342 START TEST accel_decomp_full 00:07:04.342 ************************************ 00:07:04.342 14:11:56 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.342 
14:11:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:04.342 [2024-07-12 14:11:56.099969] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:04.342 [2024-07-12 14:11:56.100036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382575 ] 00:07:04.342 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.342 [2024-07-12 14:11:56.154920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.342 [2024-07-12 14:11:56.226838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:04.342 14:11:56 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.342 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:04.343 14:11:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:04.343 14:11:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:04.343 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:04.343 14:11:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.720 14:11:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.720 00:07:05.720 real 0m1.342s 00:07:05.720 user 0m1.228s 00:07:05.720 sys 0m0.116s 00:07:05.720 14:11:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.720 14:11:57 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:05.720 ************************************ 00:07:05.720 END TEST accel_decomp_full 00:07:05.720 ************************************ 00:07:05.720 14:11:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.720 14:11:57 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.720 14:11:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:05.720 14:11:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.720 14:11:57 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:05.720 ************************************ 00:07:05.720 START TEST accel_decomp_mcore 00:07:05.720 ************************************ 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.720 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:05.721 [2024-07-12 14:11:57.497191] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:05.721 [2024-07-12 14:11:57.497238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382822 ] 00:07:05.721 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.721 [2024-07-12 14:11:57.550914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.721 [2024-07-12 14:11:57.624901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.721 [2024-07-12 14:11:57.624997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.721 [2024-07-12 14:11:57.625081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.721 [2024-07-12 14:11:57.625082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.721 14:11:57 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.721 14:11:57 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.721 14:11:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 
14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.100 00:07:07.100 real 0m1.343s 00:07:07.100 user 0m4.559s 00:07:07.100 sys 0m0.125s 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.100 14:11:58 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:07.100 ************************************ 00:07:07.100 END TEST accel_decomp_mcore 00:07:07.100 ************************************ 00:07:07.100 14:11:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.100 14:11:58 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.100 14:11:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:07.100 14:11:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.100 14:11:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.100 ************************************ 00:07:07.100 START TEST accel_decomp_full_mcore 00:07:07.100 ************************************ 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:07.100 14:11:58 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:07.100 [2024-07-12 14:11:58.910473] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:07.100 [2024-07-12 14:11:58.910543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383076 ] 00:07:07.100 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.100 [2024-07-12 14:11:58.966279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.100 [2024-07-12 14:11:59.042068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.100 [2024-07-12 14:11:59.042161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.100 [2024-07-12 14:11:59.042251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.100 [2024-07-12 14:11:59.042253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.100 14:11:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.100 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.101 14:11:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 
14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.479 00:07:08.479 real 0m1.362s 00:07:08.479 user 0m4.610s 00:07:08.479 sys 0m0.123s 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.479 14:12:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:08.479 ************************************ 00:07:08.479 END TEST accel_decomp_full_mcore 00:07:08.479 ************************************ 00:07:08.479 14:12:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.479 14:12:00 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.479 14:12:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:08.479 14:12:00 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:08.479 14:12:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.479 ************************************ 00:07:08.479 START TEST accel_decomp_mthread 00:07:08.479 ************************************ 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.479 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:07:08.479 [2024-07-12 14:12:00.337352] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:08.479 [2024-07-12 14:12:00.337404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383359 ] 00:07:08.479 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.479 [2024-07-12 14:12:00.392037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.479 [2024-07-12 14:12:00.465806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.739 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.739 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.739 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.740 
14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.740 14:12:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.678 00:07:09.678 real 0m1.340s 00:07:09.678 user 0m1.235s 00:07:09.678 sys 0m0.118s 00:07:09.678 14:12:01 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.678 14:12:01 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.678 ************************************ 00:07:09.678 END TEST accel_decomp_mthread 00:07:09.678 ************************************ 00:07:09.678 14:12:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.678 14:12:01 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.678 14:12:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:09.678 14:12:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.678 14:12:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.937 ************************************ 00:07:09.937 START TEST accel_decomp_full_mthread 00:07:09.937 ************************************ 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:09.937 [2024-07-12 14:12:01.744291] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:09.937 [2024-07-12 14:12:01.744339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383686 ] 00:07:09.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.937 [2024-07-12 14:12:01.798834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.937 [2024-07-12 14:12:01.871220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:12:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.339 00:07:11.339 real 0m1.362s 00:07:11.339 user 0m1.256s 00:07:11.339 sys 0m0.118s 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.339 14:12:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:11.339 ************************************ 00:07:11.339 END TEST accel_decomp_full_mthread 00:07:11.339 ************************************ 00:07:11.339 14:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.339 14:12:03 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:11.339 14:12:03 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.339 
14:12:03 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:11.339 14:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.339 14:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.339 14:12:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.339 14:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.339 14:12:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.339 14:12:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.339 14:12:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.339 14:12:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.339 14:12:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.339 14:12:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.339 ************************************ 00:07:11.339 START TEST accel_dif_functional_tests 00:07:11.339 ************************************ 00:07:11.339 14:12:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.339 [2024-07-12 14:12:03.194229] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:11.339 [2024-07-12 14:12:03.194267] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383958 ] 00:07:11.339 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.339 [2024-07-12 14:12:03.246048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.339 [2024-07-12 14:12:03.321223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.339 [2024-07-12 14:12:03.321319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.339 [2024-07-12 14:12:03.321321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.601 00:07:11.601 00:07:11.601 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.601 http://cunit.sourceforge.net/ 00:07:11.601 00:07:11.601 00:07:11.601 Suite: accel_dif 00:07:11.601 Test: verify: DIF generated, GUARD check ...passed 00:07:11.601 Test: verify: DIF generated, APPTAG check ...passed 00:07:11.601 Test: verify: DIF generated, REFTAG check ...passed 00:07:11.601 Test: verify: DIF not generated, GUARD check ...[2024-07-12 14:12:03.389568] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.601 passed 00:07:11.601 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 14:12:03.389617] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.601 passed 00:07:11.602 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 14:12:03.389636] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.602 passed 00:07:11.602 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:11.602 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 14:12:03.389679] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:07:11.602 passed 00:07:11.602 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:11.602 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:11.602 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:11.602 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 14:12:03.389778] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:11.602 passed 00:07:11.602 Test: verify copy: DIF generated, GUARD check ...passed 00:07:11.602 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:11.602 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:11.602 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 14:12:03.389883] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.602 passed 00:07:11.602 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 14:12:03.389906] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.602 passed 00:07:11.602 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 14:12:03.389928] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.602 passed 00:07:11.602 Test: generate copy: DIF generated, GUARD check ...passed 00:07:11.602 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:11.602 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:11.602 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:11.602 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:11.602 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:11.602 Test: generate copy: iovecs-len validate ...[2024-07-12 14:12:03.390096] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:11.602 passed 00:07:11.602 Test: generate copy: buffer alignment validate ...passed 00:07:11.602 00:07:11.602 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.602 suites 1 1 n/a 0 0 00:07:11.602 tests 26 26 26 0 0 00:07:11.602 asserts 115 115 115 0 n/a 00:07:11.602 00:07:11.602 Elapsed time = 0.000 seconds 00:07:11.602 00:07:11.602 real 0m0.412s 00:07:11.602 user 0m0.630s 00:07:11.602 sys 0m0.139s 00:07:11.602 14:12:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.602 14:12:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 ************************************ 00:07:11.602 END TEST accel_dif_functional_tests 00:07:11.602 ************************************ 00:07:11.602 14:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.602 00:07:11.602 real 0m30.666s 00:07:11.602 user 0m34.450s 00:07:11.602 sys 0m4.073s 00:07:11.602 14:12:03 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.602 14:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.602 ************************************ 00:07:11.602 END TEST accel 00:07:11.602 ************************************ 00:07:11.860 14:12:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.860 14:12:03 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.860 14:12:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.860 14:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.860 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:11.860 ************************************ 00:07:11.860 START TEST accel_rpc 00:07:11.860 ************************************ 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.860 * Looking for test storage... 
00:07:11.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:11.860 14:12:03 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.860 14:12:03 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2384032 00:07:11.860 14:12:03 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2384032 00:07:11.860 14:12:03 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2384032 ']' 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.860 14:12:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.860 [2024-07-12 14:12:03.802801] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:11.860 [2024-07-12 14:12:03.802851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384032 ] 00:07:11.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.860 [2024-07-12 14:12:03.857872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.118 [2024-07-12 14:12:03.937568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.684 14:12:04 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.684 14:12:04 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.684 14:12:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:12.684 14:12:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:12.684 14:12:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:12.684 14:12:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:12.684 14:12:04 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:12.684 14:12:04 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.684 14:12:04 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.684 14:12:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.684 ************************************ 00:07:12.684 START TEST accel_assign_opcode 00:07:12.684 ************************************ 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:07:12.684 [2024-07-12 14:12:04.635651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.684 [2024-07-12 14:12:04.643663] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.684 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.943 software 00:07:12.943 00:07:12.943 real 0m0.236s 00:07:12.943 user 0m0.043s 00:07:12.943 sys 0m0.012s 00:07:12.943 14:12:04 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.943 14:12:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:12.943 ************************************ 00:07:12.943 END TEST accel_assign_opcode 00:07:12.943 ************************************ 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:12.943 14:12:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2384032 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2384032 ']' 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2384032 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2384032 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2384032' 00:07:12.943 killing process with pid 2384032 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@967 -- # kill 2384032 00:07:12.943 14:12:04 accel_rpc -- common/autotest_common.sh@972 -- # wait 2384032 00:07:13.510 00:07:13.510 real 0m1.592s 00:07:13.510 user 0m1.675s 00:07:13.510 sys 0m0.411s 00:07:13.510 14:12:05 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.510 14:12:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 ************************************ 00:07:13.510 END TEST accel_rpc 00:07:13.510 ************************************ 00:07:13.510 14:12:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.510 14:12:05 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.510 14:12:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.510 14:12:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.510 14:12:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 ************************************ 00:07:13.510 START TEST app_cmdline 00:07:13.510 ************************************ 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.510 * Looking for test storage... 00:07:13.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.510 14:12:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:13.510 14:12:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:13.510 14:12:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2384499 00:07:13.510 14:12:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2384499 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2384499 ']' 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.510 14:12:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.510 [2024-07-12 14:12:05.438870] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:07:13.510 [2024-07-12 14:12:05.438916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384499 ] 00:07:13.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.510 [2024-07-12 14:12:05.492404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.769 [2024-07-12 14:12:05.573227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.336 14:12:06 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.336 14:12:06 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:14.336 14:12:06 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:14.595 { 00:07:14.595 "version": "SPDK v24.09-pre git sha1 192cfc373", 00:07:14.595 "fields": { 00:07:14.595 "major": 24, 00:07:14.595 "minor": 9, 00:07:14.595 "patch": 0, 00:07:14.595 "suffix": "-pre", 00:07:14.595 "commit": "192cfc373" 00:07:14.595 } 00:07:14.595 } 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.595 14:12:06 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.595 14:12:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.595 14:12:06 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.595 14:12:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.595 14:12:06 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:14.595 14:12:06 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.595 14:12:06 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.596 14:12:06 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.855 request: 00:07:14.855 { 00:07:14.855 "method": "env_dpdk_get_mem_stats", 00:07:14.855 "req_id": 1 
00:07:14.855 } 00:07:14.855 Got JSON-RPC error response 00:07:14.855 response: 00:07:14.855 { 00:07:14.855 "code": -32601, 00:07:14.855 "message": "Method not found" 00:07:14.855 } 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.855 14:12:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2384499 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2384499 ']' 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2384499 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2384499 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2384499' 00:07:14.855 killing process with pid 2384499 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@967 -- # kill 2384499 00:07:14.855 14:12:06 app_cmdline -- common/autotest_common.sh@972 -- # wait 2384499 00:07:15.114 00:07:15.114 real 0m1.683s 00:07:15.114 user 0m2.015s 00:07:15.114 sys 0m0.426s 00:07:15.114 14:12:07 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.114 14:12:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.114 ************************************ 00:07:15.114 END TEST app_cmdline 00:07:15.114 ************************************ 00:07:15.114 14:12:07 -- 
common/autotest_common.sh@1142 -- # return 0 00:07:15.114 14:12:07 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.114 14:12:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.114 14:12:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.114 14:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.114 ************************************ 00:07:15.114 START TEST version 00:07:15.114 ************************************ 00:07:15.114 14:12:07 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.374 * Looking for test storage... 00:07:15.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.374 14:12:07 version -- app/version.sh@17 -- # get_header_version major 00:07:15.374 14:12:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # cut -f2 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.374 14:12:07 version -- app/version.sh@17 -- # major=24 00:07:15.374 14:12:07 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.374 14:12:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # cut -f2 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.374 14:12:07 version -- app/version.sh@18 -- # minor=9 00:07:15.374 14:12:07 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.374 14:12:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # cut -f2 00:07:15.374 14:12:07 version -- app/version.sh@19 -- # patch=0 00:07:15.374 14:12:07 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # cut -f2 00:07:15.374 14:12:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.374 14:12:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.374 14:12:07 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.374 14:12:07 version -- app/version.sh@22 -- # version=24.9 00:07:15.374 14:12:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.374 14:12:07 version -- app/version.sh@28 -- # version=24.9rc0 00:07:15.374 14:12:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:15.374 14:12:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:15.374 14:12:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:15.374 14:12:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:15.374 00:07:15.374 real 0m0.158s 00:07:15.374 user 0m0.087s 00:07:15.374 sys 0m0.103s 00:07:15.374 14:12:07 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.374 14:12:07 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.374 ************************************ 00:07:15.374 END TEST version 00:07:15.374 ************************************ 00:07:15.374 14:12:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.374 14:12:07 -- 
spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@198 -- # uname -s 00:07:15.374 14:12:07 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:15.374 14:12:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:15.374 14:12:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:15.374 14:12:07 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:15.374 14:12:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:15.374 14:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.374 14:12:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:15.374 14:12:07 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:15.374 14:12:07 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.374 14:12:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.374 14:12:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.374 14:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.374 ************************************ 00:07:15.374 START TEST nvmf_tcp 00:07:15.374 ************************************ 00:07:15.374 14:12:07 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.633 * Looking for test storage... 00:07:15.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.633 14:12:07 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.634 14:12:07 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.634 14:12:07 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.634 14:12:07 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.634 14:12:07 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:15.634 14:12:07 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:15.634 14:12:07 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.634 14:12:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:15.634 14:12:07 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:15.634 14:12:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.634 14:12:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.634 14:12:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.634 ************************************ 00:07:15.634 START TEST nvmf_example 00:07:15.634 ************************************ 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:15.634 * Looking for test storage... 
00:07:15.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.634 14:12:07 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.634 14:12:07 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.634 
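The `build_nvmf_example_args` trace above assembles the example's command line as a bash array: start with the binary, then append flag groups, where an array that stayed empty (like `NO_HUGE`) expands to nothing. A minimal standalone sketch of that pattern — `SPDK_EXAMPLE_DIR` and the variable values here are placeholders, not the values from this run:

```shell
#!/usr/bin/env bash
# Argument-array pattern from the trace: base command first, then
# conditional flag groups appended with +=.
SPDK_EXAMPLE_DIR=./build/examples   # placeholder path
NVMF_APP_SHM_ID=0                   # shm id, as echoed in the trace
NO_HUGE=()                          # populated only when hugepages are disabled

NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")           # base binary
NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)    # shm id + core group, as in the log
NVMF_EXAMPLE+=("${NO_HUGE[@]}")                   # empty array adds no arguments

echo "${NVMF_EXAMPLE[@]}"
```

Because `"${NO_HUGE[@]}"` expands to zero words when the array is empty, the final command line is exactly the base binary plus the four explicit flags.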
14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.634 14:12:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:20.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:20.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.907 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:20.908 Found net devices under 0000:86:00.0: cvl_0_0 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:20.908 Found net devices under 0000:86:00.1: cvl_0_1 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.908 14:12:12 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:20.908 00:07:20.908 --- 10.0.0.2 ping statistics --- 00:07:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.908 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:07:20.908 00:07:20.908 --- 10.0.0.1 ping statistics --- 00:07:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.908 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.908 14:12:12 
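The `nvmf_tcp_init` phase traced above builds a two-endpoint topology on real NICs: the target-side interface is moved into a fresh network namespace (`cvl_0_0_ns_spdk`), each side gets an address on 10.0.0.0/24, and a ping in each direction confirms connectivity before the target starts. The dry-run generator below sketches that sequence with the interface names and IPs from the log; `run` only prints each command, so the sketch is safe to execute without root or the actual hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology from the trace. The real script
# executes these via sudo on physical interfaces; here they are echoed.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                        # isolate the target side
run ip link set "$TARGET_IF" netns "$NS"                      # move target NIC into the ns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"               # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP (ns side)
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # host -> target, as verified above
```

Putting the target behind a namespace lets initiator and target traffic traverse the real NIC pair on one machine, which is why the log's later commands are all prefixed with `ip netns exec cvl_0_0_ns_spdk`.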
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2388331 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2388331 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2388331 ']' 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
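`waitforlisten` above blocks until the freshly launched target creates its RPC socket at `/var/tmp/spdk.sock`, retrying up to the `max_retries=100` echoed in the trace. A self-contained sketch of that polling pattern — the real helper also checks that the owning pid is still alive, which is omitted here:

```shell
#!/usr/bin/env bash
# Poll for a socket (or any path) to appear, up to max_retries attempts.
# Returns 0 once the path exists, 1 if it never shows up.
wait_for_listen() {
    local sock="$1" max_retries="${2:-100}" i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage mirrors the trace: `wait_for_listen /var/tmp/spdk.sock 100 || exit 1` before issuing any `rpc_cmd` calls against the target.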
00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.908 14:12:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.908 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:21.843 14:12:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:21.843 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.820 Initializing NVMe Controllers 00:07:31.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:31.820 Initialization complete. Launching workers. 
00:07:31.820 ======================================================== 00:07:31.820 Latency(us) 00:07:31.820 Device Information : IOPS MiB/s Average min max 00:07:31.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18485.12 72.21 3463.06 695.08 16396.53 00:07:31.820 ======================================================== 00:07:31.820 Total : 18485.12 72.21 3463.06 695.08 16396.53 00:07:31.820 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.820 rmmod nvme_tcp 00:07:31.820 rmmod nvme_fabrics 00:07:31.820 rmmod nvme_keyring 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2388331 ']' 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2388331 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2388331 ']' 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2388331 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.820 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2388331 00:07:32.080 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:32.080 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:32.080 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2388331' 00:07:32.080 killing process with pid 2388331 00:07:32.080 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2388331 00:07:32.080 14:12:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2388331 00:07:32.080 nvmf threads initialize successfully 00:07:32.080 bdev subsystem init successfully 00:07:32.080 created a nvmf target service 00:07:32.080 create targets's poll groups done 00:07:32.080 all subsystems of target started 00:07:32.080 nvmf target is running 00:07:32.080 all subsystems of target stopped 00:07:32.080 destroy targets's poll groups done 00:07:32.080 destroyed the nvmf target service 00:07:32.080 bdev subsystem finish successfully 00:07:32.080 nvmf threads destroy successfully 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.080 14:12:24 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.622 00:07:34.622 real 0m18.649s 00:07:34.622 user 0m45.230s 00:07:34.622 sys 0m5.210s 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.622 14:12:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.622 ************************************ 00:07:34.622 END TEST nvmf_example 00:07:34.622 ************************************ 00:07:34.622 14:12:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:34.622 14:12:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.623 14:12:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.623 14:12:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.623 14:12:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.623 ************************************ 00:07:34.623 START TEST nvmf_filesystem 00:07:34.623 ************************************ 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.623 * Looking for test storage... 
00:07:34.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.623 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:34.623 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:34.623 #define SPDK_CONFIG_H 00:07:34.623 
#define SPDK_CONFIG_APPS 1 00:07:34.623 #define SPDK_CONFIG_ARCH native 00:07:34.623 #undef SPDK_CONFIG_ASAN 00:07:34.623 #undef SPDK_CONFIG_AVAHI 00:07:34.623 #undef SPDK_CONFIG_CET 00:07:34.623 #define SPDK_CONFIG_COVERAGE 1 00:07:34.623 #define SPDK_CONFIG_CROSS_PREFIX 00:07:34.623 #undef SPDK_CONFIG_CRYPTO 00:07:34.623 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:34.623 #undef SPDK_CONFIG_CUSTOMOCF 00:07:34.623 #undef SPDK_CONFIG_DAOS 00:07:34.623 #define SPDK_CONFIG_DAOS_DIR 00:07:34.623 #define SPDK_CONFIG_DEBUG 1 00:07:34.623 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:34.624 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.624 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:34.624 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:34.624 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:34.624 #undef SPDK_CONFIG_DPDK_UADK 00:07:34.624 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.624 #define SPDK_CONFIG_EXAMPLES 1 00:07:34.624 #undef SPDK_CONFIG_FC 00:07:34.624 #define SPDK_CONFIG_FC_PATH 00:07:34.624 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:34.624 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:34.624 #undef SPDK_CONFIG_FUSE 00:07:34.624 #undef SPDK_CONFIG_FUZZER 00:07:34.624 #define SPDK_CONFIG_FUZZER_LIB 00:07:34.624 #undef SPDK_CONFIG_GOLANG 00:07:34.624 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:34.624 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:34.624 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:34.624 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:34.624 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:34.624 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:34.624 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:34.624 #define SPDK_CONFIG_IDXD 1 00:07:34.624 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:34.624 #undef SPDK_CONFIG_IPSEC_MB 00:07:34.624 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:34.624 #define SPDK_CONFIG_ISAL 1 00:07:34.624 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:34.624 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:07:34.624 #define SPDK_CONFIG_LIBDIR 00:07:34.624 #undef SPDK_CONFIG_LTO 00:07:34.624 #define SPDK_CONFIG_MAX_LCORES 128 00:07:34.624 #define SPDK_CONFIG_NVME_CUSE 1 00:07:34.624 #undef SPDK_CONFIG_OCF 00:07:34.624 #define SPDK_CONFIG_OCF_PATH 00:07:34.624 #define SPDK_CONFIG_OPENSSL_PATH 00:07:34.624 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:34.624 #define SPDK_CONFIG_PGO_DIR 00:07:34.624 #undef SPDK_CONFIG_PGO_USE 00:07:34.624 #define SPDK_CONFIG_PREFIX /usr/local 00:07:34.624 #undef SPDK_CONFIG_RAID5F 00:07:34.624 #undef SPDK_CONFIG_RBD 00:07:34.624 #define SPDK_CONFIG_RDMA 1 00:07:34.624 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:34.624 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:34.624 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:34.624 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:34.624 #define SPDK_CONFIG_SHARED 1 00:07:34.624 #undef SPDK_CONFIG_SMA 00:07:34.624 #define SPDK_CONFIG_TESTS 1 00:07:34.624 #undef SPDK_CONFIG_TSAN 00:07:34.624 #define SPDK_CONFIG_UBLK 1 00:07:34.624 #define SPDK_CONFIG_UBSAN 1 00:07:34.624 #undef SPDK_CONFIG_UNIT_TESTS 00:07:34.624 #undef SPDK_CONFIG_URING 00:07:34.624 #define SPDK_CONFIG_URING_PATH 00:07:34.624 #undef SPDK_CONFIG_URING_ZNS 00:07:34.624 #undef SPDK_CONFIG_USDT 00:07:34.624 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:34.624 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:34.624 #define SPDK_CONFIG_VFIO_USER 1 00:07:34.624 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:34.624 #define SPDK_CONFIG_VHOST 1 00:07:34.624 #define SPDK_CONFIG_VIRTIO 1 00:07:34.624 #undef SPDK_CONFIG_VTUNE 00:07:34.624 #define SPDK_CONFIG_VTUNE_DIR 00:07:34.624 #define SPDK_CONFIG_WERROR 1 00:07:34.624 #define SPDK_CONFIG_WPDK_DIR 00:07:34.624 #undef SPDK_CONFIG_XNVME 00:07:34.624 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:34.624 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:34.624 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:34.624 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:34.625 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:34.625 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:34.625 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.625 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.626 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:34.626 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2390745 ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2390745 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1707 -- # set_test_storage 2147483648 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5eM9kB 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5eM9kB/tests/target /tmp/spdk.5eM9kB 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:07:34.626 14:12:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190093017088 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974303744 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5881286656 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931513856 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9375744 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986756608 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987153920 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=397312 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:34.626 * Looking for test storage... 
00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190093017088 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8095879168 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1709 -- # set -o errtrace 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1710 -- # shopt -s extdebug 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1711 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1713 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1714 -- # true 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1716 -- # xtrace_fd 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:34.626 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.627 14:12:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.907 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.908 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.908 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:07:39.908 00:07:39.908 --- 10.0.0.2 ping statistics --- 00:07:39.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.908 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:07:39.908 00:07:39.908 --- 10.0.0.1 ping statistics --- 00:07:39.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.908 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 ************************************ 00:07:39.908 START TEST nvmf_filesystem_no_in_capsule 00:07:39.908 ************************************ 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2393780 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2393780 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2393780 ']' 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.908 14:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 [2024-07-12 14:12:31.887740] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:39.908 [2024-07-12 14:12:31.887782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.908 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.168 [2024-07-12 14:12:31.947077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.168 [2024-07-12 14:12:32.023339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.168 [2024-07-12 14:12:32.023382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.168 [2024-07-12 14:12:32.023389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.168 [2024-07-12 14:12:32.023395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.168 [2024-07-12 14:12:32.023400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:40.168 [2024-07-12 14:12:32.023491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.168 [2024-07-12 14:12:32.023507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.168 [2024-07-12 14:12:32.023597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.168 [2024-07-12 14:12:32.023598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.739 [2024-07-12 14:12:32.734458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.739 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.036 Malloc1 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.036 14:12:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.036 [2024-07-12 14:12:32.877630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.036 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:41.037 { 00:07:41.037 "name": "Malloc1", 00:07:41.037 "aliases": [ 00:07:41.037 "cb2aa81d-ab8c-4b0e-b62d-b6024562f6c2" 00:07:41.037 ], 00:07:41.037 "product_name": "Malloc disk", 
00:07:41.037 "block_size": 512, 00:07:41.037 "num_blocks": 1048576, 00:07:41.037 "uuid": "cb2aa81d-ab8c-4b0e-b62d-b6024562f6c2", 00:07:41.037 "assigned_rate_limits": { 00:07:41.037 "rw_ios_per_sec": 0, 00:07:41.037 "rw_mbytes_per_sec": 0, 00:07:41.037 "r_mbytes_per_sec": 0, 00:07:41.037 "w_mbytes_per_sec": 0 00:07:41.037 }, 00:07:41.037 "claimed": true, 00:07:41.037 "claim_type": "exclusive_write", 00:07:41.037 "zoned": false, 00:07:41.037 "supported_io_types": { 00:07:41.037 "read": true, 00:07:41.037 "write": true, 00:07:41.037 "unmap": true, 00:07:41.037 "flush": true, 00:07:41.037 "reset": true, 00:07:41.037 "nvme_admin": false, 00:07:41.037 "nvme_io": false, 00:07:41.037 "nvme_io_md": false, 00:07:41.037 "write_zeroes": true, 00:07:41.037 "zcopy": true, 00:07:41.037 "get_zone_info": false, 00:07:41.037 "zone_management": false, 00:07:41.037 "zone_append": false, 00:07:41.037 "compare": false, 00:07:41.037 "compare_and_write": false, 00:07:41.037 "abort": true, 00:07:41.037 "seek_hole": false, 00:07:41.037 "seek_data": false, 00:07:41.037 "copy": true, 00:07:41.037 "nvme_iov_md": false 00:07:41.037 }, 00:07:41.037 "memory_domains": [ 00:07:41.037 { 00:07:41.037 "dma_device_id": "system", 00:07:41.037 "dma_device_type": 1 00:07:41.037 }, 00:07:41.037 { 00:07:41.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.037 "dma_device_type": 2 00:07:41.037 } 00:07:41.037 ], 00:07:41.037 "driver_specific": {} 00:07:41.037 } 00:07:41.037 ]' 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:41.037 
14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:41.037 14:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.415 14:12:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.415 14:12:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:42.415 14:12:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.415 14:12:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:42.415 14:12:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.320 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:44.579 14:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:45.516 14:12:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.516 ************************************ 00:07:45.516 START TEST filesystem_ext4 00:07:45.516 ************************************ 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:45.516 14:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.516 mke2fs 1.46.5 (30-Dec-2021) 00:07:45.516 Discarding device blocks: 0/522240 done 00:07:45.516 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:45.516 Filesystem UUID: ba2b51ce-032b-4b8a-957d-008b88159eb9 00:07:45.516 Superblock backups stored on blocks: 00:07:45.516 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:45.516 00:07:45.516 Allocating group tables: 0/64 done 00:07:45.516 Writing inode tables: 0/64 done 00:07:46.453 Creating journal (8192 blocks): done 00:07:46.453 Writing superblocks and filesystem accounting information: 0/64 done 00:07:46.453 00:07:46.453 14:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:46.453 14:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2393780 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.390 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.650 00:07:47.650 real 0m2.011s 00:07:47.650 user 0m0.027s 00:07:47.650 sys 0m0.065s 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:47.650 ************************************ 00:07:47.650 END TEST filesystem_ext4 00:07:47.650 ************************************ 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.650 14:12:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.650 ************************************ 00:07:47.650 START TEST filesystem_btrfs 00:07:47.650 ************************************ 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:47.650 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.219 btrfs-progs v6.6.2 00:07:48.219 See https://btrfs.readthedocs.io for more 
information. 00:07:48.219 00:07:48.219 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:48.219 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.219 this does not affect your deployments: 00:07:48.219 - DUP for metadata (-m dup) 00:07:48.219 - enabled no-holes (-O no-holes) 00:07:48.219 - enabled free-space-tree (-R free-space-tree) 00:07:48.219 00:07:48.219 Label: (null) 00:07:48.219 UUID: d1fcdc40-64cf-45b2-9aa9-634680bbbdc3 00:07:48.219 Node size: 16384 00:07:48.219 Sector size: 4096 00:07:48.219 Filesystem size: 510.00MiB 00:07:48.219 Block group profiles: 00:07:48.219 Data: single 8.00MiB 00:07:48.219 Metadata: DUP 32.00MiB 00:07:48.219 System: DUP 8.00MiB 00:07:48.219 SSD detected: yes 00:07:48.219 Zoned device: no 00:07:48.219 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.219 Runtime features: free-space-tree 00:07:48.219 Checksum: crc32c 00:07:48.219 Number of devices: 1 00:07:48.219 Devices: 00:07:48.219 ID SIZE PATH 00:07:48.219 1 510.00MiB /dev/nvme0n1p1 00:07:48.219 00:07:48.219 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.219 14:12:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.787 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.787 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:48.787 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:49.047 14:12:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2393780 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.047 00:07:49.047 real 0m1.368s 00:07:49.047 user 0m0.030s 00:07:49.047 sys 0m0.118s 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:49.047 ************************************ 00:07:49.047 END TEST filesystem_btrfs 00:07:49.047 ************************************ 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.047 ************************************ 00:07:49.047 START TEST filesystem_xfs 00:07:49.047 ************************************ 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:49.047 14:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:49.047 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:07:49.047 = sectsz=512 attr=2, projid32bit=1 00:07:49.047 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:49.047 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:49.047 data = bsize=4096 blocks=130560, imaxpct=25 00:07:49.047 = sunit=0 swidth=0 blks 00:07:49.047 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:49.047 log =internal log bsize=4096 blocks=16384, version=2 00:07:49.047 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:49.047 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.983 Discarding blocks...Done. 00:07:49.983 14:12:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:50.247 14:12:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2393780 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.151 14:12:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.151 00:07:52.151 real 0m2.970s 00:07:52.151 user 0m0.018s 00:07:52.151 sys 0m0.076s 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:52.151 ************************************ 00:07:52.151 END TEST filesystem_xfs 00:07:52.151 ************************************ 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:52.151 14:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:52.151 14:12:44 
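The make_filesystem helper traced above (common/autotest_common.sh@924-935) picks its force flag by filesystem type: `-F` for ext4, `-f` for xfs and btrfs. A sketch of that selection logic, with the mkfs invocation only echoed since formatting a real partition needs root and a device:

```shell
# Sketch of the force-flag selection inside make_filesystem
# (common/autotest_common.sh): ext4 takes -F, everything else takes -f.
make_filesystem_cmd() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem_cmd xfs /dev/nvme0n1p1    # -> mkfs.xfs -f /dev/nvme0n1p1
make_filesystem_cmd ext4 /dev/nvme0n1p1   # -> mkfs.ext4 -F /dev/nvme0n1p1
```

The real helper also keeps a retry counter `i` and a `return 0` on success (@943), visible in the trace once mkfs.xfs finishes.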
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2393780 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2393780 ']' 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2393780 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.151 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2393780 00:07:52.410 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.410 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.410 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2393780' 00:07:52.410 killing process with pid 2393780 00:07:52.410 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2393780 00:07:52.410 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2393780 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:52.669 00:07:52.669 real 0m12.679s 00:07:52.669 user 0m49.835s 00:07:52.669 sys 0m1.207s 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.669 ************************************ 00:07:52.669 END TEST nvmf_filesystem_no_in_capsule 00:07:52.669 ************************************ 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.669 ************************************ 00:07:52.669 START TEST 
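The killprocess sequence above (common/autotest_common.sh@948-972) checks the pid is alive with `kill -0`, resolves the process name with `ps --no-headers -o comm=`, refuses to kill `sudo` itself, then kills and waits. A sketch of that pattern, run against a throwaway `sleep` so it is safe to execute anywhere:

```shell
# Sketch of the killprocess helper: liveness check, name check, kill, wait.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1          # is it alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                  # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
killprocess_sketch $!
```

In the log the name resolves to `reactor_0` (the SPDK app thread), so the guard passes and pid 2393780 is killed and waited on.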
nvmf_filesystem_in_capsule 00:07:52.669 ************************************ 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2396085 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2396085 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2396085 ']' 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.669 14:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.669 [2024-07-12 14:12:44.636117] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:07:52.669 [2024-07-12 14:12:44.636156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.927 [2024-07-12 14:12:44.697594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.928 [2024-07-12 14:12:44.772058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.928 [2024-07-12 14:12:44.772097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.928 [2024-07-12 14:12:44.772104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.928 [2024-07-12 14:12:44.772109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.928 [2024-07-12 14:12:44.772114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.928 [2024-07-12 14:12:44.772159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.928 [2024-07-12 14:12:44.772269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.928 [2024-07-12 14:12:44.772353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.928 [2024-07-12 14:12:44.772355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.494 [2024-07-12 14:12:45.489265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.494 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 Malloc1 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.752 14:12:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 [2024-07-12 14:12:45.637114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:53.752 { 00:07:53.752 "name": "Malloc1", 00:07:53.752 "aliases": [ 00:07:53.752 "0d4463a7-95bc-469c-ae89-7d2a9bacdf98" 00:07:53.752 ], 00:07:53.752 "product_name": "Malloc disk", 00:07:53.752 "block_size": 512, 00:07:53.752 "num_blocks": 1048576, 00:07:53.752 "uuid": "0d4463a7-95bc-469c-ae89-7d2a9bacdf98", 00:07:53.752 "assigned_rate_limits": { 
00:07:53.752 "rw_ios_per_sec": 0, 00:07:53.752 "rw_mbytes_per_sec": 0, 00:07:53.752 "r_mbytes_per_sec": 0, 00:07:53.752 "w_mbytes_per_sec": 0 00:07:53.752 }, 00:07:53.752 "claimed": true, 00:07:53.752 "claim_type": "exclusive_write", 00:07:53.752 "zoned": false, 00:07:53.752 "supported_io_types": { 00:07:53.752 "read": true, 00:07:53.752 "write": true, 00:07:53.752 "unmap": true, 00:07:53.752 "flush": true, 00:07:53.752 "reset": true, 00:07:53.752 "nvme_admin": false, 00:07:53.752 "nvme_io": false, 00:07:53.752 "nvme_io_md": false, 00:07:53.752 "write_zeroes": true, 00:07:53.752 "zcopy": true, 00:07:53.752 "get_zone_info": false, 00:07:53.752 "zone_management": false, 00:07:53.752 "zone_append": false, 00:07:53.752 "compare": false, 00:07:53.752 "compare_and_write": false, 00:07:53.752 "abort": true, 00:07:53.752 "seek_hole": false, 00:07:53.752 "seek_data": false, 00:07:53.752 "copy": true, 00:07:53.752 "nvme_iov_md": false 00:07:53.752 }, 00:07:53.752 "memory_domains": [ 00:07:53.752 { 00:07:53.752 "dma_device_id": "system", 00:07:53.752 "dma_device_type": 1 00:07:53.752 }, 00:07:53.752 { 00:07:53.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.752 "dma_device_type": 2 00:07:53.752 } 00:07:53.752 ], 00:07:53.752 "driver_specific": {} 00:07:53.752 } 00:07:53.752 ]' 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:53.752 14:12:45 
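The get_bdev_size helper above pulls `block_size` and `num_blocks` out of the `bdev_get_bdevs` JSON with jq and converts to MiB: 512 B x 1048576 blocks = 536870912 B = 512 MiB, which is where `malloc_size=536870912` comes from. A sketch of that calculation against a trimmed copy of the log's JSON (no live RPC call, and sed instead of jq to stay dependency-free):

```shell
# Sketch of the get_bdev_size arithmetic from common/autotest_common.sh.
# bdev_info is a trimmed copy of the bdev_get_bdevs output in the log.
bdev_info='{"name": "Malloc1", "block_size": 512, "num_blocks": 1048576}'

bs=$(echo "$bdev_info" | sed -n 's/.*"block_size": \([0-9]*\).*/\1/p')
nb=$(echo "$bdev_info" | sed -n 's/.*"num_blocks": \([0-9]*\).*/\1/p')
bdev_size_mib=$(( bs * nb / 1024 / 1024 ))
echo "$bdev_size_mib"    # 512 MiB, i.e. malloc_size=536870912 bytes
```

The test then asserts `(( nvme_size == malloc_size ))` before partitioning, so the exported namespace must report exactly the Malloc bdev's capacity.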
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.752 14:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.126 14:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.126 14:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.126 14:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.126 14:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.126 14:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.054 14:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.313 14:12:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:57.572 14:12:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:58.532 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:58.532 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
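The waitforserial loop that just completed (common/autotest_common.sh@1198-1208) polls `lsblk -l -o NAME,SERIAL` until a device carrying the subsystem's serial appears, giving up after 15 iterations. A sketch with the probe command injectable (and the sleep shortened from the real 2 s) so it runs without an NVMe device attached:

```shell
# Sketch of the waitforserial polling loop. `probe` is a hypothetical
# stand-in for `lsblk -l -o NAME,SERIAL`; the real helper also counts
# matching devices against nvme_device_counter.
waitforserial_sketch() {
    local serial=$1 probe=$2 i=0
    while (( i++ <= 15 )); do
        if $probe | grep -qw "$serial"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Hypothetical probe emitting what lsblk shows after `nvme connect`:
fake_lsblk() { echo "nvme0n1 SPDKISFASTANDAWESOME"; }
waitforserial_sketch SPDKISFASTANDAWESOME fake_lsblk && echo connected
```

The disconnect path at @1219-1231 is the mirror image: poll until the serial disappears from lsblk after `nvme disconnect`.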
00:07:58.532 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:58.532 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.532 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.791 ************************************ 00:07:58.791 START TEST filesystem_in_capsule_ext4 00:07:58.791 ************************************ 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:58.791 14:12:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:58.791 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:58.791 mke2fs 1.46.5 (30-Dec-2021) 00:07:58.791 Discarding device blocks: 0/522240 done 00:07:58.791 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:58.791 Filesystem UUID: 1af873a3-a7eb-4f15-8a81-f7c8997a2139 00:07:58.791 Superblock backups stored on blocks: 00:07:58.791 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:58.791 00:07:58.791 Allocating group tables: 0/64 done 00:07:58.791 Writing inode tables: 0/64 done 00:07:59.050 Creating journal (8192 blocks): done 00:07:59.050 Writing superblocks and filesystem accounting information: 0/64 done 00:07:59.050 00:07:59.050 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:59.050 14:12:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:59.987 14:12:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2396085 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.987 00:07:59.987 real 0m1.272s 00:07:59.987 user 0m0.032s 00:07:59.987 sys 0m0.059s 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:59.987 ************************************ 00:07:59.987 END TEST filesystem_in_capsule_ext4 00:07:59.987 ************************************ 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.987 
14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.987 ************************************ 00:07:59.987 START TEST filesystem_in_capsule_btrfs 00:07:59.987 ************************************ 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:59.987 14:12:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:08:00.246 btrfs-progs v6.6.2 00:08:00.246 See https://btrfs.readthedocs.io for more information. 00:08:00.246 00:08:00.246 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:00.246 NOTE: several default settings have changed in version 5.15, please make sure 00:08:00.246 this does not affect your deployments: 00:08:00.246 - DUP for metadata (-m dup) 00:08:00.246 - enabled no-holes (-O no-holes) 00:08:00.246 - enabled free-space-tree (-R free-space-tree) 00:08:00.246 00:08:00.246 Label: (null) 00:08:00.246 UUID: b1b17932-5b08-4eb9-ade5-831e9b6cb7dc 00:08:00.246 Node size: 16384 00:08:00.246 Sector size: 4096 00:08:00.246 Filesystem size: 510.00MiB 00:08:00.246 Block group profiles: 00:08:00.246 Data: single 8.00MiB 00:08:00.246 Metadata: DUP 32.00MiB 00:08:00.246 System: DUP 8.00MiB 00:08:00.246 SSD detected: yes 00:08:00.246 Zoned device: no 00:08:00.246 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:00.246 Runtime features: free-space-tree 00:08:00.246 Checksum: crc32c 00:08:00.246 Number of devices: 1 00:08:00.246 Devices: 00:08:00.246 ID SIZE PATH 00:08:00.247 1 510.00MiB /dev/nvme0n1p1 00:08:00.247 00:08:00.247 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:00.247 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.817 14:12:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2396085 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.817 00:08:00.817 real 0m0.799s 00:08:00.817 user 0m0.030s 00:08:00.817 sys 0m0.122s 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.817 ************************************ 00:08:00.817 END TEST filesystem_in_capsule_btrfs 00:08:00.817 ************************************ 00:08:00.817 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.818 ************************************ 00:08:00.818 START TEST filesystem_in_capsule_xfs 00:08:00.818 ************************************ 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:00.818 14:12:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:00.818 14:12:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:01.114 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:01.114 = sectsz=512 attr=2, projid32bit=1 00:08:01.114 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:01.114 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:01.114 data = bsize=4096 blocks=130560, imaxpct=25 00:08:01.114 = sunit=0 swidth=0 blks 00:08:01.114 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:01.114 log =internal log bsize=4096 blocks=16384, version=2 00:08:01.114 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:01.114 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.681 Discarding blocks...Done. 00:08:01.681 14:12:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:01.681 14:12:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:03.585 14:12:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2396085 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.585 00:08:03.585 real 0m2.677s 00:08:03.585 user 0m0.021s 00:08:03.585 sys 0m0.072s 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.585 ************************************ 00:08:03.585 END TEST filesystem_in_capsule_xfs 00:08:03.585 ************************************ 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.585 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.844 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:03.844 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2396085 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2396085 ']' 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 2396085 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396085 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396085' 00:08:04.103 killing process with pid 2396085 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2396085 00:08:04.103 14:12:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2396085 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.362 00:08:04.362 real 0m11.681s 00:08:04.362 user 0m45.882s 00:08:04.362 sys 0m1.171s 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.362 ************************************ 00:08:04.362 END TEST nvmf_filesystem_in_capsule 00:08:04.362 ************************************ 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:04.362 14:12:56 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.362 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.362 rmmod nvme_tcp 00:08:04.362 rmmod nvme_fabrics 00:08:04.362 rmmod nvme_keyring 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.363 14:12:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.898 14:12:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.898 00:08:06.898 real 0m32.250s 00:08:06.898 user 1m37.387s 00:08:06.898 sys 0m6.536s 00:08:06.898 14:12:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.898 14:12:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.898 ************************************ 00:08:06.898 END TEST nvmf_filesystem 00:08:06.898 ************************************ 00:08:06.898 14:12:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:06.898 14:12:58 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.898 14:12:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.898 14:12:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.898 14:12:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.898 ************************************ 00:08:06.898 START TEST nvmf_target_discovery 00:08:06.898 ************************************ 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.898 * Looking for test storage... 
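The `run_test ... START TEST ... END TEST` banners above come from a `run_test`-style wrapper in the harness. A minimal sketch of that pattern (an illustrative reconstruction, not SPDK's exact helper — the banner text mirrors the log, the body is assumed):

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: prints START/END banners around a
# test command and propagates its exit status, as seen in the log above.
run_test() {
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local rc=0
    "$@" || rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test demo_true true
```

In the log, the test command is itself a script invocation (e.g. `run_test nvmf_target_discovery .../discovery.sh --transport=tcp`), so a failing script surfaces as a nonzero return from the wrapper.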
00:08:06.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
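The `NVME_HOSTNQN` sourced above is produced by `nvme gen-hostnqn`, which emits `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. An equivalent without nvme-cli, assuming a Linux host (the `/proc` UUID source is the assumption here):

```shell
#!/usr/bin/env bash
# Reproduce the `nvme gen-hostnqn` output format without nvme-cli,
# using the kernel's UUID generator (Linux-specific assumption).
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"
echo "$NVME_HOSTNQN"
```

The harness then passes both values on every connect as `--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID`, matching the `NVME_HOST` array set in `nvmf/common.sh@19`.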
00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.898 14:12:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.899 14:12:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.899 14:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:12.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.174 
14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:12.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:12.174 Found net devices under 0000:86:00.0: cvl_0_0 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:12.174 Found net devices under 0000:86:00.1: cvl_0_1 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
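The device-discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `${pci_net_devs[@]##*/}` expansion) maps each whitelisted PCI address to its kernel network interfaces via sysfs. A self-contained sketch of that step, with the sysfs root made a parameter so it can be exercised without real hardware (the parameterization is an addition for illustration):

```shell
#!/usr/bin/env bash
# List the net interfaces the kernel bound to a PCI device, mirroring the
# sysfs glob and path-stripping expansion used in the log above.
list_pci_net_devs() {
    local pci="$1" root="${2:-/sys/bus/pci/devices}"
    local pci_net_devs=("$root/$pci/net/"*)
    # Keep only the interface names, as in "${pci_net_devs[@]##*/}".
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```

On the log's host this yields entries such as `Found net devices under 0000:86:00.0: cvl_0_0` for the two e810 (0x8086:0x159b) ports.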
00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.174 14:13:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.174 14:13:04 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:08:12.174 00:08:12.174 --- 10.0.0.2 ping statistics --- 00:08:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.174 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:12.174 00:08:12.174 --- 10.0.0.1 ping statistics --- 00:08:12.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.174 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.174 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2401662 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2401662 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2401662 ']' 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.175 14:13:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 [2024-07-12 14:13:04.214450] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:08:12.434 [2024-07-12 14:13:04.214498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.434 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.434 [2024-07-12 14:13:04.273628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.434 [2024-07-12 14:13:04.354335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.434 [2024-07-12 14:13:04.354371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.434 [2024-07-12 14:13:04.354381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.434 [2024-07-12 14:13:04.354387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.434 [2024-07-12 14:13:04.354393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
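The target was started with `-m 0xF`, and the EAL notices above report four cores, with one reactor started on each of cores 0–3. The core mask is a plain bitmap: bit N set means core N runs a reactor. A small sketch of how that mask decodes (helper name is illustrative):

```python
def cores_from_mask(mask):
    """Expand an SPDK/DPDK -m core mask (e.g. 0xF) into the list of
    core IDs that will each run a reactor thread."""
    if isinstance(mask, str):
        mask = int(mask, 16)  # accept the "0xF" form used on the command line
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]
```

`cores_from_mask("0xF")` gives `[0, 1, 2, 3]`, consistent with the four "Reactor started on core …" notices in this log.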
00:08:12.434 [2024-07-12 14:13:04.354432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.434 [2024-07-12 14:13:04.354533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.434 [2024-07-12 14:13:04.354618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.434 [2024-07-12 14:13:04.354619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.371 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.371 [2024-07-12 14:13:05.073228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:13.372 14:13:05 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 Null1 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 [2024-07-12 14:13:05.118703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.372 14:13:05 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 Null2 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 Null3 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 Null4 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:13.372 00:08:13.372 Discovery Log Number of Records 6, Generation counter 6 00:08:13.372 =====Discovery Log Entry 0====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: current discovery subsystem 00:08:13.372 treq: not required 00:08:13.372 portid: 0 00:08:13.372 trsvcid: 4420 00:08:13.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: explicit discovery connections, duplicate discovery information 00:08:13.372 sectype: none 00:08:13.372 =====Discovery Log Entry 1====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: nvme subsystem 00:08:13.372 treq: not required 00:08:13.372 portid: 0 00:08:13.372 trsvcid: 4420 00:08:13.372 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: none 00:08:13.372 sectype: none 00:08:13.372 =====Discovery Log Entry 2====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: nvme subsystem 00:08:13.372 treq: not required 00:08:13.372 portid: 
0 00:08:13.372 trsvcid: 4420 00:08:13.372 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: none 00:08:13.372 sectype: none 00:08:13.372 =====Discovery Log Entry 3====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: nvme subsystem 00:08:13.372 treq: not required 00:08:13.372 portid: 0 00:08:13.372 trsvcid: 4420 00:08:13.372 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: none 00:08:13.372 sectype: none 00:08:13.372 =====Discovery Log Entry 4====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: nvme subsystem 00:08:13.372 treq: not required 00:08:13.372 portid: 0 00:08:13.372 trsvcid: 4420 00:08:13.372 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: none 00:08:13.372 sectype: none 00:08:13.372 =====Discovery Log Entry 5====== 00:08:13.372 trtype: tcp 00:08:13.372 adrfam: ipv4 00:08:13.372 subtype: discovery subsystem referral 00:08:13.372 treq: not required 00:08:13.372 portid: 0 00:08:13.372 trsvcid: 4430 00:08:13.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.372 traddr: 10.0.0.2 00:08:13.372 eflags: none 00:08:13.372 sectype: none 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:13.372 Perform nvmf subsystem discovery via RPC 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.372 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.373 [ 00:08:13.373 { 00:08:13.373 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:13.373 "subtype": "Discovery", 00:08:13.373 "listen_addresses": [ 00:08:13.373 { 00:08:13.373 "trtype": "TCP", 00:08:13.373 "adrfam": "IPv4", 00:08:13.373 "traddr": "10.0.0.2", 
00:08:13.373 "trsvcid": "4420" 00:08:13.373 } 00:08:13.373 ], 00:08:13.373 "allow_any_host": true, 00:08:13.373 "hosts": [] 00:08:13.373 }, 00:08:13.373 { 00:08:13.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.373 "subtype": "NVMe", 00:08:13.373 "listen_addresses": [ 00:08:13.373 { 00:08:13.373 "trtype": "TCP", 00:08:13.373 "adrfam": "IPv4", 00:08:13.373 "traddr": "10.0.0.2", 00:08:13.373 "trsvcid": "4420" 00:08:13.373 } 00:08:13.373 ], 00:08:13.373 "allow_any_host": true, 00:08:13.373 "hosts": [], 00:08:13.373 "serial_number": "SPDK00000000000001", 00:08:13.373 "model_number": "SPDK bdev Controller", 00:08:13.373 "max_namespaces": 32, 00:08:13.373 "min_cntlid": 1, 00:08:13.373 "max_cntlid": 65519, 00:08:13.373 "namespaces": [ 00:08:13.373 { 00:08:13.373 "nsid": 1, 00:08:13.373 "bdev_name": "Null1", 00:08:13.373 "name": "Null1", 00:08:13.373 "nguid": "2F02CC1E6D2D4D66B1E5D16CB420EAAB", 00:08:13.373 "uuid": "2f02cc1e-6d2d-4d66-b1e5-d16cb420eaab" 00:08:13.373 } 00:08:13.373 ] 00:08:13.373 }, 00:08:13.373 { 00:08:13.373 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:13.373 "subtype": "NVMe", 00:08:13.373 "listen_addresses": [ 00:08:13.373 { 00:08:13.373 "trtype": "TCP", 00:08:13.373 "adrfam": "IPv4", 00:08:13.373 "traddr": "10.0.0.2", 00:08:13.373 "trsvcid": "4420" 00:08:13.373 } 00:08:13.373 ], 00:08:13.373 "allow_any_host": true, 00:08:13.373 "hosts": [], 00:08:13.373 "serial_number": "SPDK00000000000002", 00:08:13.373 "model_number": "SPDK bdev Controller", 00:08:13.373 "max_namespaces": 32, 00:08:13.373 "min_cntlid": 1, 00:08:13.373 "max_cntlid": 65519, 00:08:13.373 "namespaces": [ 00:08:13.373 { 00:08:13.373 "nsid": 1, 00:08:13.373 "bdev_name": "Null2", 00:08:13.373 "name": "Null2", 00:08:13.373 "nguid": "38E8CF77DD8B4850A7656FF64DACC1C4", 00:08:13.373 "uuid": "38e8cf77-dd8b-4850-a765-6ff64dacc1c4" 00:08:13.373 } 00:08:13.373 ] 00:08:13.373 }, 00:08:13.373 { 00:08:13.373 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:13.373 "subtype": "NVMe", 00:08:13.373 
"listen_addresses": [ 00:08:13.373 { 00:08:13.373 "trtype": "TCP", 00:08:13.373 "adrfam": "IPv4", 00:08:13.373 "traddr": "10.0.0.2", 00:08:13.373 "trsvcid": "4420" 00:08:13.373 } 00:08:13.373 ], 00:08:13.373 "allow_any_host": true, 00:08:13.373 "hosts": [], 00:08:13.373 "serial_number": "SPDK00000000000003", 00:08:13.373 "model_number": "SPDK bdev Controller", 00:08:13.373 "max_namespaces": 32, 00:08:13.373 "min_cntlid": 1, 00:08:13.373 "max_cntlid": 65519, 00:08:13.373 "namespaces": [ 00:08:13.373 { 00:08:13.373 "nsid": 1, 00:08:13.373 "bdev_name": "Null3", 00:08:13.373 "name": "Null3", 00:08:13.373 "nguid": "3ED4C704F20641F89E8A7946236AF5DE", 00:08:13.373 "uuid": "3ed4c704-f206-41f8-9e8a-7946236af5de" 00:08:13.373 } 00:08:13.373 ] 00:08:13.373 }, 00:08:13.373 { 00:08:13.373 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:13.373 "subtype": "NVMe", 00:08:13.373 "listen_addresses": [ 00:08:13.373 { 00:08:13.373 "trtype": "TCP", 00:08:13.373 "adrfam": "IPv4", 00:08:13.373 "traddr": "10.0.0.2", 00:08:13.373 "trsvcid": "4420" 00:08:13.373 } 00:08:13.373 ], 00:08:13.373 "allow_any_host": true, 00:08:13.373 "hosts": [], 00:08:13.373 "serial_number": "SPDK00000000000004", 00:08:13.373 "model_number": "SPDK bdev Controller", 00:08:13.373 "max_namespaces": 32, 00:08:13.373 "min_cntlid": 1, 00:08:13.373 "max_cntlid": 65519, 00:08:13.373 "namespaces": [ 00:08:13.373 { 00:08:13.373 "nsid": 1, 00:08:13.373 "bdev_name": "Null4", 00:08:13.373 "name": "Null4", 00:08:13.373 "nguid": "FC50AE7EDD2149249B6AF5A617B3FA0A", 00:08:13.373 "uuid": "fc50ae7e-dd21-4924-9b6a-f5a617b3fa0a" 00:08:13.373 } 00:08:13.373 ] 00:08:13.373 } 00:08:13.373 ] 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.373 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.633 rmmod nvme_tcp 00:08:13.633 rmmod nvme_fabrics 00:08:13.633 rmmod nvme_keyring 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:13.633 
14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2401662 ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2401662 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2401662 ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2401662 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2401662 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2401662' 00:08:13.633 killing process with pid 2401662 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2401662 00:08:13.633 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2401662 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.892 14:13:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.892 14:13:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.431 14:13:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.431 00:08:16.431 real 0m9.309s 00:08:16.431 user 0m7.242s 00:08:16.431 sys 0m4.572s 00:08:16.431 14:13:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.431 14:13:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.431 ************************************ 00:08:16.431 END TEST nvmf_target_discovery 00:08:16.431 ************************************ 00:08:16.431 14:13:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.431 14:13:07 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.431 14:13:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.431 14:13:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.431 14:13:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.431 ************************************ 00:08:16.431 START TEST nvmf_referrals 00:08:16.431 ************************************ 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.431 * Looking for test storage... 
00:08:16.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:16.431 14:13:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.431 
14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.431 14:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.432 14:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.432 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.432 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.432 14:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.432 14:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:21.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.709 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:21.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:21.710 Found net devices under 0000:86:00.0: cvl_0_0 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.710 14:13:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:21.710 Found net devices under 0000:86:00.1: cvl_0_1 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:21.710 00:08:21.710 --- 10.0.0.2 ping statistics --- 00:08:21.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.710 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:08:21.710 00:08:21.710 --- 10.0.0.1 ping statistics --- 00:08:21.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.710 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.710 14:13:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2405270 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2405270 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2405270 ']' 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.710 14:13:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.710 [2024-07-12 14:13:12.931974] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:08:21.710 [2024-07-12 14:13:12.932020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.710 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.710 [2024-07-12 14:13:12.989111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.710 [2024-07-12 14:13:13.070173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.710 [2024-07-12 14:13:13.070209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:21.710 [2024-07-12 14:13:13.070216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.710 [2024-07-12 14:13:13.070222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.710 [2024-07-12 14:13:13.070228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.710 [2024-07-12 14:13:13.070483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.710 [2024-07-12 14:13:13.070500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.710 [2024-07-12 14:13:13.070520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.710 [2024-07-12 14:13:13.070521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 [2024-07-12 14:13:13.786338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 [2024-07-12 14:13:13.799670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.970 14:13:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.229 14:13:14 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:22.229 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:22.488 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:22.489 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.489 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:22.489 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.748 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.008 14:13:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:23.267 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:23.267 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:23.267 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.267 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.268 14:13:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.268 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.527 rmmod nvme_tcp 00:08:23.527 rmmod nvme_fabrics 00:08:23.527 rmmod nvme_keyring 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2405270 ']' 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2405270 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2405270 ']' 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2405270 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405270 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405270' 00:08:23.527 killing process with pid 2405270 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2405270 00:08:23.527 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2405270 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.787 14:13:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.689 14:13:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.689 00:08:25.689 real 0m9.779s 00:08:25.689 user 0m12.241s 00:08:25.689 sys 0m4.308s 00:08:25.689 14:13:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.689 14:13:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.689 ************************************ 
00:08:25.689 END TEST nvmf_referrals 00:08:25.689 ************************************ 00:08:25.689 14:13:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.689 14:13:17 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.689 14:13:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.689 14:13:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.689 14:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.948 ************************************ 00:08:25.948 START TEST nvmf_connect_disconnect 00:08:25.948 ************************************ 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:25.948 * Looking for test storage... 00:08:25.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.948 14:13:17 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.948 14:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:31.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:31.237 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:31.238 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.238 14:13:23 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:31.238 Found net devices under 0000:86:00.0: cvl_0_0 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:31.238 Found net devices under 0000:86:00.1: cvl_0_1 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:08:31.238 00:08:31.238 --- 10.0.0.2 ping statistics --- 00:08:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.238 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:31.238 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:31.496 00:08:31.496 --- 10.0.0.1 ping statistics --- 00:08:31.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.496 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2409300 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2409300 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2409300 ']' 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.496 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.497 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.497 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.497 14:13:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.497 [2024-07-12 14:13:23.337113] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:08:31.497 [2024-07-12 14:13:23.337153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.497 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.497 [2024-07-12 14:13:23.393611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.497 [2024-07-12 14:13:23.474302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.497 [2024-07-12 14:13:23.474336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.497 [2024-07-12 14:13:23.474343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.497 [2024-07-12 14:13:23.474350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.497 [2024-07-12 14:13:23.474355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
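The `nvmf/common.sh` setup traced above builds a two-port loopback topology: one NIC port (`cvl_0_0`) is moved into a fresh network namespace and addressed `10.0.0.2/24` to host the target, while its peer (`cvl_0_1`) stays in the root namespace as the initiator at `10.0.0.1/24`, with an iptables rule admitting TCP/4420 and a ping in each direction as a sanity check. A minimal sketch of that sequence follows; the interface and namespace names are taken from this log, and because the real commands need root and the physical NICs, the function below only prints the sequence rather than executing it:

```shell
#!/bin/sh
# Sketch of the nvmf/common.sh netns topology seen in this log.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) come from the log itself;
# running these for real requires root, so this only prints the plan.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, left in the root namespace

netns_setup_cmds() {
    printf '%s\n' \
        "ip -4 addr flush $TGT_IF" \
        "ip -4 addr flush $INI_IF" \
        "ip netns add $NS" \
        "ip link set $TGT_IF netns $NS" \
        "ip addr add 10.0.0.1/24 dev $INI_IF" \
        "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF" \
        "ip link set $INI_IF up" \
        "ip netns exec $NS ip link set $TGT_IF up" \
        "ip netns exec $NS ip link set lo up" \
        "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT" \
        "ping -c 1 10.0.0.2" \
        "ip netns exec $NS ping -c 1 10.0.0.1"
}

netns_setup_cmds
```

This is why `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix): the target listens on 10.0.0.2:4420 inside the namespace, and the host-side initiator reaches it over the cabled port pair.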
00:08:31.497 [2024-07-12 14:13:23.474397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.497 [2024-07-12 14:13:23.474457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.497 [2024-07-12 14:13:23.474540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.497 [2024-07-12 14:13:23.474541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.432 [2024-07-12 14:13:24.201280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.432 [2024-07-12 14:13:24.253052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:32.432 14:13:24 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:32.432 14:13:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:35.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.889 rmmod nvme_tcp 00:08:48.889 rmmod nvme_fabrics 00:08:48.889 rmmod nvme_keyring 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2409300 ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2409300 00:08:48.889 14:13:40 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2409300 ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2409300 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409300 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409300' 00:08:48.889 killing process with pid 2409300 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2409300 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2409300 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.889 14:13:40 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.796 14:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.796 00:08:50.796 real 0m25.075s 00:08:50.796 user 1m10.186s 00:08:50.796 sys 0m5.290s 00:08:51.056 14:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.056 14:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:51.056 ************************************ 00:08:51.056 END TEST nvmf_connect_disconnect 00:08:51.056 ************************************ 00:08:51.056 14:13:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:51.056 14:13:42 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:51.056 14:13:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.056 14:13:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.056 14:13:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.056 ************************************ 00:08:51.056 START TEST nvmf_multitarget 00:08:51.056 ************************************ 00:08:51.056 14:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:51.056 * Looking for test storage... 
00:08:51.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.056 14:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.056 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:51.056 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.057 14:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.057 14:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.057 14:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.057 14:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.057 14:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:56.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:56.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:56.331 Found net devices under 0000:86:00.0: cvl_0_0 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.331 14:13:48 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:56.331 Found net devices under 0000:86:00.1: cvl_0_1 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.331 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.331 14:13:48 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:08:56.332 00:08:56.332 --- 10.0.0.2 ping statistics --- 00:08:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.332 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:56.332 00:08:56.332 --- 10.0.0.1 ping statistics --- 00:08:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.332 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.332 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.591 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2415688 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2415688 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 2415688 ']' 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.592 14:13:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:56.592 [2024-07-12 14:13:48.407612] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:08:56.592 [2024-07-12 14:13:48.407655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.592 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.592 [2024-07-12 14:13:48.462939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.592 [2024-07-12 14:13:48.543513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.592 [2024-07-12 14:13:48.543547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.592 [2024-07-12 14:13:48.543554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.592 [2024-07-12 14:13:48.543560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.592 [2024-07-12 14:13:48.543569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
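The multitarget test that follows drives `multitarget_rpc.py` and validates each step by piping `nvmf_get_targets` through `jq length` and comparing the count (`'[' 1 '!=' 1 ']'`, `'[' 3 '!=' 3 ']'` in the trace). A sketch of that check pattern, with the helper below standing in for the real RPC calls (which need a running `nvmf_tgt`):

```shell
#!/bin/sh
# Sketch of the multitarget test's count-check pattern. The commented
# rpc_py lines mirror the log; expect_target_count is a hypothetical
# stand-in for the `jq length` comparison the test script performs.
expect_target_count() {
    # $1 = observed count (what `jq length` returned), $2 = expected
    if [ "$1" != "$2" ]; then
        echo "unexpected target count: got $1, want $2" >&2
        return 1
    fi
}

# The log's sequence: 1 default target, +2 created, then both deleted.
expect_target_count 1 1   # baseline: only the default target exists
# rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
# rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
expect_target_count 3 3   # default target + two named targets
# rpc_py nvmf_delete_target -n nvmf_tgt_1
# rpc_py nvmf_delete_target -n nvmf_tgt_2
expect_target_count 1 1   # back to the default target only
```

Each `jq length` mismatch makes the test script exit nonzero, which is what the `'!='` comparisons in the trace are guarding against.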
00:08:56.592 [2024-07-12 14:13:48.543805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.592 [2024-07-12 14:13:48.543823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.592 [2024-07-12 14:13:48.543911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.592 [2024-07-12 14:13:48.543913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:57.571 "nvmf_tgt_1" 00:08:57.571 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:57.571 "nvmf_tgt_2" 00:08:57.830 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:57.830 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:57.830 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:57.830 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:57.830 true 00:08:57.830 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:58.091 true 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:58.091 14:13:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.091 14:13:49 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.091 rmmod nvme_tcp 00:08:58.091 rmmod nvme_fabrics 00:08:58.091 rmmod nvme_keyring 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2415688 ']' 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2415688 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2415688 ']' 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2415688 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.091 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415688 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415688' 00:08:58.351 killing process with pid 2415688 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2415688 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2415688 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.351 14:13:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.886 14:13:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.886 00:09:00.886 real 0m9.482s 00:09:00.886 user 0m9.183s 00:09:00.886 sys 0m4.507s 00:09:00.886 14:13:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.886 14:13:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:00.886 ************************************ 00:09:00.886 END TEST nvmf_multitarget 00:09:00.886 ************************************ 00:09:00.886 14:13:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.886 14:13:52 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:00.886 14:13:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.886 14:13:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.886 14:13:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.886 ************************************ 00:09:00.886 START TEST nvmf_rpc 00:09:00.886 ************************************ 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:00.886 * Looking for test storage... 
00:09:00.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.886 14:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.159 14:13:57 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:06.159 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:06.159 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:06.159 Found net devices under 0000:86:00.0: cvl_0_0 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:06.159 Found net devices under 0000:86:00.1: cvl_0_1 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.159 14:13:57 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:09:06.159 00:09:06.159 --- 10.0.0.2 ping statistics --- 00:09:06.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.159 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:06.159 00:09:06.159 --- 10.0.0.1 ping statistics --- 00:09:06.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.159 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:06.159 
14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2419479 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2419479 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2419479 ']' 00:09:06.159 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.160 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.160 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.160 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.160 14:13:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.160 [2024-07-12 14:13:57.861102] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:09:06.160 [2024-07-12 14:13:57.861143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.160 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.160 [2024-07-12 14:13:57.918420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.160 [2024-07-12 14:13:57.996053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.160 [2024-07-12 14:13:57.996089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.160 [2024-07-12 14:13:57.996096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.160 [2024-07-12 14:13:57.996106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.160 [2024-07-12 14:13:57.996111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:06.160 [2024-07-12 14:13:57.996159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.160 [2024-07-12 14:13:57.996277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.160 [2024-07-12 14:13:57.996468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.160 [2024-07-12 14:13:57.996470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:06.728 "tick_rate": 2300000000, 00:09:06.728 "poll_groups": [ 00:09:06.728 { 00:09:06.728 "name": "nvmf_tgt_poll_group_000", 00:09:06.728 "admin_qpairs": 0, 00:09:06.728 "io_qpairs": 0, 00:09:06.728 "current_admin_qpairs": 0, 00:09:06.728 "current_io_qpairs": 0, 00:09:06.728 "pending_bdev_io": 0, 00:09:06.728 "completed_nvme_io": 0, 00:09:06.728 "transports": [] 00:09:06.728 }, 00:09:06.728 { 00:09:06.728 "name": "nvmf_tgt_poll_group_001", 00:09:06.728 "admin_qpairs": 0, 00:09:06.728 "io_qpairs": 0, 00:09:06.728 "current_admin_qpairs": 
0, 00:09:06.728 "current_io_qpairs": 0, 00:09:06.728 "pending_bdev_io": 0, 00:09:06.728 "completed_nvme_io": 0, 00:09:06.728 "transports": [] 00:09:06.728 }, 00:09:06.728 { 00:09:06.728 "name": "nvmf_tgt_poll_group_002", 00:09:06.728 "admin_qpairs": 0, 00:09:06.728 "io_qpairs": 0, 00:09:06.728 "current_admin_qpairs": 0, 00:09:06.728 "current_io_qpairs": 0, 00:09:06.728 "pending_bdev_io": 0, 00:09:06.728 "completed_nvme_io": 0, 00:09:06.728 "transports": [] 00:09:06.728 }, 00:09:06.728 { 00:09:06.728 "name": "nvmf_tgt_poll_group_003", 00:09:06.728 "admin_qpairs": 0, 00:09:06.728 "io_qpairs": 0, 00:09:06.728 "current_admin_qpairs": 0, 00:09:06.728 "current_io_qpairs": 0, 00:09:06.728 "pending_bdev_io": 0, 00:09:06.728 "completed_nvme_io": 0, 00:09:06.728 "transports": [] 00:09:06.728 } 00:09:06.728 ] 00:09:06.728 }' 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:06.728 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.987 [2024-07-12 14:13:58.822746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.987 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:06.987 "tick_rate": 2300000000, 00:09:06.987 "poll_groups": [ 00:09:06.987 { 00:09:06.987 "name": "nvmf_tgt_poll_group_000", 00:09:06.987 "admin_qpairs": 0, 00:09:06.987 "io_qpairs": 0, 00:09:06.987 "current_admin_qpairs": 0, 00:09:06.987 "current_io_qpairs": 0, 00:09:06.987 "pending_bdev_io": 0, 00:09:06.987 "completed_nvme_io": 0, 00:09:06.987 "transports": [ 00:09:06.987 { 00:09:06.987 "trtype": "TCP" 00:09:06.987 } 00:09:06.987 ] 00:09:06.987 }, 00:09:06.987 { 00:09:06.987 "name": "nvmf_tgt_poll_group_001", 00:09:06.987 "admin_qpairs": 0, 00:09:06.988 "io_qpairs": 0, 00:09:06.988 "current_admin_qpairs": 0, 00:09:06.988 "current_io_qpairs": 0, 00:09:06.988 "pending_bdev_io": 0, 00:09:06.988 "completed_nvme_io": 0, 00:09:06.988 "transports": [ 00:09:06.988 { 00:09:06.988 "trtype": "TCP" 00:09:06.988 } 00:09:06.988 ] 00:09:06.988 }, 00:09:06.988 { 00:09:06.988 "name": "nvmf_tgt_poll_group_002", 00:09:06.988 "admin_qpairs": 0, 00:09:06.988 "io_qpairs": 0, 00:09:06.988 "current_admin_qpairs": 0, 00:09:06.988 "current_io_qpairs": 0, 00:09:06.988 "pending_bdev_io": 0, 00:09:06.988 "completed_nvme_io": 0, 00:09:06.988 "transports": [ 00:09:06.988 { 00:09:06.988 "trtype": "TCP" 00:09:06.988 } 00:09:06.988 ] 00:09:06.988 }, 00:09:06.988 { 00:09:06.988 "name": "nvmf_tgt_poll_group_003", 00:09:06.988 "admin_qpairs": 0, 00:09:06.988 "io_qpairs": 0, 00:09:06.988 "current_admin_qpairs": 0, 00:09:06.988 "current_io_qpairs": 0, 00:09:06.988 "pending_bdev_io": 0, 00:09:06.988 "completed_nvme_io": 0, 00:09:06.988 "transports": [ 00:09:06.988 { 00:09:06.988 "trtype": "TCP" 00:09:06.988 } 00:09:06.988 ] 00:09:06.988 } 
00:09:06.988 ] 00:09:06.988 }' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.988 Malloc1 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.988 [2024-07-12 14:13:58.990784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:06.988 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:07.247 14:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:07.247 [2024-07-12 14:13:59.019314] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:09:07.247 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:07.247 could not add new controller: failed to write to nvme-fabrics device 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.247 14:13:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.623 14:14:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.623 14:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.623 14:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.623 14:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:08.623 14:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.527 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.528 14:14:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.528 [2024-07-12 14:14:02.375479] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:09:10.528 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:10.528 could not add new controller: failed to write to nvme-fabrics device 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.528 14:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.907 14:14:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.907 14:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.907 14:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.907 14:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.907 14:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 [2024-07-12 14:14:05.630412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.812 14:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.192 14:14:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.192 14:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.192 14:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.192 14:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.192 14:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 [2024-07-12 14:14:08.947829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.098 14:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.475 14:14:10 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.475 14:14:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.475 14:14:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.475 14:14:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.475 14:14:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.380 14:14:12 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 [2024-07-12 14:14:12.329291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.380 14:14:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 14:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.381 14:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.822 14:14:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.822 14:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.822 14:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.822 14:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.822 14:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 [2024-07-12 14:14:15.599352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.730 14:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.105 14:14:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.105 14:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:09:25.105 14:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.105 14:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.105 14:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.006 [2024-07-12 14:14:18.868597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.006 14:14:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.383 14:14:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.384 14:14:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.384 14:14:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.384 14:14:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:28.384 14:14:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:30.291 14:14:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.291 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 [2024-07-12 14:14:22.144520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 [2024-07-12 14:14:22.192636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 [2024-07-12 14:14:22.244761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.291 [2024-07-12 14:14:22.292933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.291 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 [2024-07-12 14:14:22.341121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.551 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:30.551 "tick_rate": 2300000000, 00:09:30.551 "poll_groups": [ 00:09:30.551 { 00:09:30.551 "name": "nvmf_tgt_poll_group_000", 00:09:30.551 "admin_qpairs": 2, 00:09:30.551 "io_qpairs": 168, 00:09:30.551 "current_admin_qpairs": 0, 00:09:30.551 "current_io_qpairs": 0, 00:09:30.551 "pending_bdev_io": 0, 00:09:30.551 "completed_nvme_io": 217, 00:09:30.551 "transports": [ 00:09:30.551 { 00:09:30.551 "trtype": "TCP" 00:09:30.551 } 00:09:30.551 ] 00:09:30.551 }, 00:09:30.551 { 00:09:30.551 "name": "nvmf_tgt_poll_group_001", 00:09:30.551 "admin_qpairs": 2, 00:09:30.551 "io_qpairs": 168, 
00:09:30.551 "current_admin_qpairs": 0, 00:09:30.551 "current_io_qpairs": 0, 00:09:30.551 "pending_bdev_io": 0, 00:09:30.551 "completed_nvme_io": 270, 00:09:30.551 "transports": [ 00:09:30.551 { 00:09:30.551 "trtype": "TCP" 00:09:30.551 } 00:09:30.551 ] 00:09:30.551 }, 00:09:30.551 { 00:09:30.551 "name": "nvmf_tgt_poll_group_002", 00:09:30.551 "admin_qpairs": 1, 00:09:30.551 "io_qpairs": 168, 00:09:30.551 "current_admin_qpairs": 0, 00:09:30.551 "current_io_qpairs": 0, 00:09:30.551 "pending_bdev_io": 0, 00:09:30.551 "completed_nvme_io": 316, 00:09:30.551 "transports": [ 00:09:30.551 { 00:09:30.551 "trtype": "TCP" 00:09:30.551 } 00:09:30.551 ] 00:09:30.551 }, 00:09:30.551 { 00:09:30.551 "name": "nvmf_tgt_poll_group_003", 00:09:30.551 "admin_qpairs": 2, 00:09:30.551 "io_qpairs": 168, 00:09:30.551 "current_admin_qpairs": 0, 00:09:30.551 "current_io_qpairs": 0, 00:09:30.551 "pending_bdev_io": 0, 00:09:30.551 "completed_nvme_io": 219, 00:09:30.551 "transports": [ 00:09:30.551 { 00:09:30.551 "trtype": "TCP" 00:09:30.551 } 00:09:30.551 ] 00:09:30.552 } 00:09:30.552 ] 00:09:30.552 }' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.552 rmmod nvme_tcp 00:09:30.552 rmmod nvme_fabrics 00:09:30.552 rmmod nvme_keyring 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2419479 ']' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2419479 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2419479 ']' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2419479 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.552 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419479 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.811 
14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419479' 00:09:30.811 killing process with pid 2419479 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2419479 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2419479 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.811 14:14:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.385 14:14:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.385 00:09:33.385 real 0m32.413s 00:09:33.385 user 1m40.291s 00:09:33.385 sys 0m5.663s 00:09:33.385 14:14:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.385 14:14:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.385 ************************************ 00:09:33.385 END TEST nvmf_rpc 00:09:33.385 ************************************ 00:09:33.385 14:14:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.385 14:14:24 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.385 14:14:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.385 14:14:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:33.385 14:14:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.385 ************************************ 00:09:33.385 START TEST nvmf_invalid 00:09:33.385 ************************************ 00:09:33.385 14:14:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.385 * Looking for test storage... 00:09:33.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.385 14:14:25 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable
00:09:33.385 14:14:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=()
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:09:38.661 Found 0000:86:00.0 (0x8086 - 0x159b)
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:09:38.661 Found 0000:86:00.1 (0x8086 - 0x159b)
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:38.661 Found net devices under 0000:86:00.0: cvl_0_0
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:38.661 Found net devices under 0000:86:00.1: cvl_0_1
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:38.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:38.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:09:38.661
00:09:38.661 --- 10.0.0.2 ping statistics ---
00:09:38.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:38.661 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:09:38.661 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:38.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:38.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:09:38.662
00:09:38.662 --- 10.0.0.1 ping statistics ---
00:09:38.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:38.662 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2427080
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2427080
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2427080 ']'
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:38.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:38.662 14:14:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:38.662 [2024-07-12 14:14:30.338925] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:09:38.662 [2024-07-12 14:14:30.338966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:38.662 EAL: No free 2048 kB hugepages reported on node 1
00:09:38.662 [2024-07-12 14:14:30.397150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:38.662 [2024-07-12 14:14:30.472077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:38.662 [2024-07-12 14:14:30.472117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:38.662 [2024-07-12 14:14:30.472124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:38.662 [2024-07-12 14:14:30.472130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:38.662 [2024-07-12 14:14:30.472135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:38.662 [2024-07-12 14:14:30.472192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:38.662 [2024-07-12 14:14:30.472285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:38.662 [2024-07-12 14:14:30.472368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:38.662 [2024-07-12 14:14:30.472370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:09:39.229 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2356
[2024-07-12 14:14:31.349822] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:09:39.488 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:09:39.488 {
00:09:39.488 "nqn": "nqn.2016-06.io.spdk:cnode2356",
00:09:39.488 "tgt_name": "foobar",
00:09:39.488 "method": "nvmf_create_subsystem",
00:09:39.488 "req_id": 1
00:09:39.488 }
00:09:39.488 Got JSON-RPC error response
00:09:39.488 response:
00:09:39.488 {
00:09:39.488 "code": -32603,
00:09:39.488 "message": "Unable to find target foobar"
00:09:39.488 }'
00:09:39.488 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:09:39.488 {
00:09:39.488 "nqn": "nqn.2016-06.io.spdk:cnode2356",
00:09:39.488 "tgt_name": "foobar",
00:09:39.488 "method": "nvmf_create_subsystem",
00:09:39.488 "req_id": 1
00:09:39.488 }
00:09:39.488 Got JSON-RPC error response
00:09:39.488 response:
00:09:39.488 {
00:09:39.488 "code": -32603,
00:09:39.488 "message": "Unable to find target foobar"
00:09:39.488 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:09:39.488 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:09:39.488 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25587
[2024-07-12 14:14:31.538477] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25587: invalid serial number 'SPDKISFASTANDAWESOME'
00:09:39.746 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:09:39.746 {
00:09:39.746 "nqn": "nqn.2016-06.io.spdk:cnode25587",
00:09:39.746 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:09:39.746 "method": "nvmf_create_subsystem",
00:09:39.746 "req_id": 1
00:09:39.746 }
00:09:39.746 Got JSON-RPC error response
00:09:39.746 response:
00:09:39.746 {
00:09:39.746 "code": -32602,
00:09:39.746 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:09:39.746 }'
00:09:39.746 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:09:39.746 {
00:09:39.746 "nqn": "nqn.2016-06.io.spdk:cnode25587",
00:09:39.746 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:09:39.746 "method": "nvmf_create_subsystem",
00:09:39.746 "req_id": 1
00:09:39.746 }
00:09:39.746 Got JSON-RPC error response
00:09:39.746 response:
00:09:39.746 {
00:09:39.746 "code": -32602,
00:09:39.746 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:09:39.746 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:39.746 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:09:39.746 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14017
[2024-07-12 14:14:31.731080] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14017: invalid model number 'SPDK_Controller'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:09:40.005 {
00:09:40.005 "nqn": "nqn.2016-06.io.spdk:cnode14017",
00:09:40.005 "model_number": "SPDK_Controller\u001f",
00:09:40.005 "method": "nvmf_create_subsystem",
00:09:40.005 "req_id": 1
00:09:40.005 }
00:09:40.005 Got JSON-RPC error response
00:09:40.005 response:
00:09:40.005 {
00:09:40.005 "code": -32602,
00:09:40.005 "message": "Invalid MN SPDK_Controller\u001f"
00:09:40.005 }'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:09:40.005 {
00:09:40.005 "nqn": "nqn.2016-06.io.spdk:cnode14017",
00:09:40.005 "model_number": "SPDK_Controller\u001f",
00:09:40.005 "method": "nvmf_create_subsystem",
00:09:40.005 "req_id": 1
00:09:40.005 }
00:09:40.005 Got JSON-RPC error response
00:09:40.005 response:
00:09:40.005 {
00:09:40.005 "code": -32602,
00:09:40.005 "message": "Invalid MN SPDK_Controller\u001f"
00:09:40.005 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:09:40.005 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=.
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='('
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]]
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rG=eA^M9Ow#\u.gu(8E,c'
00:09:40.006 14:14:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rG=eA^M9Ow#\u.gu(8E,c' nqn.2016-06.io.spdk:cnode321
[2024-07-12 14:14:32.060228] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode321: invalid serial number 'rG=eA^M9Ow#\u.gu(8E,c'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:09:40.265 {
00:09:40.265 "nqn": "nqn.2016-06.io.spdk:cnode321",
00:09:40.265 "serial_number": "rG=eA^M9Ow#\\u.gu(8E,c",
00:09:40.265 "method": "nvmf_create_subsystem",
00:09:40.265 "req_id": 1
00:09:40.265 }
00:09:40.265 Got JSON-RPC error response
00:09:40.265 response:
00:09:40.265 {
00:09:40.265 "code": -32602,
00:09:40.265 "message": "Invalid SN rG=eA^M9Ow#\\u.gu(8E,c"
00:09:40.265 }'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:09:40.265 {
00:09:40.265 "nqn": "nqn.2016-06.io.spdk:cnode321",
00:09:40.265 "serial_number": "rG=eA^M9Ow#\\u.gu(8E,c",
00:09:40.265 "method": "nvmf_create_subsystem",
00:09:40.265 "req_id": 1
00:09:40.265 }
00:09:40.265 Got JSON-RPC error response
00:09:40.265 response:
00:09:40.265 {
00:09:40.265 "code": -32602,
00:09:40.265 "message": "Invalid SN rG=eA^M9Ow#\\u.gu(8E,c"
00:09:40.265 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:09:40.265
14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:40.265 
14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:40.265 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:40.266 14:14:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:40.266 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:40.524 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:40.524 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.524 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 
00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 
14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!{]cGZ,xA'\''Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!{]cGZ,xA'\''Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J' nqn.2016-06.io.spdk:cnode12467 00:09:40.525 [2024-07-12 14:14:32.501716] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12467: invalid model number '!{]cGZ,xA'Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:40.525 { 00:09:40.525 "nqn": "nqn.2016-06.io.spdk:cnode12467", 00:09:40.525 "model_number": "!{]cGZ,xA'\''Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J", 00:09:40.525 "method": "nvmf_create_subsystem", 00:09:40.525 "req_id": 1 00:09:40.525 } 00:09:40.525 Got JSON-RPC error response 00:09:40.525 response: 00:09:40.525 { 00:09:40.525 "code": -32602, 00:09:40.525 "message": "Invalid MN !{]cGZ,xA'\''Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J" 00:09:40.525 }' 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:40.525 { 00:09:40.525 "nqn": "nqn.2016-06.io.spdk:cnode12467", 00:09:40.525 "model_number": "!{]cGZ,xA'Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J", 00:09:40.525 "method": "nvmf_create_subsystem", 00:09:40.525 "req_id": 1 00:09:40.525 } 00:09:40.525 Got JSON-RPC error response 00:09:40.525 response: 00:09:40.525 { 00:09:40.525 "code": -32602, 00:09:40.525 "message": "Invalid MN !{]cGZ,xA'Ctv`3r9ajVNO$0Z&p!.)cSRI35AK 4J" 00:09:40.525 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:40.525 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:40.783 [2024-07-12 14:14:32.686408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.783 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:41.042 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:41.042 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:41.042 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:41.042 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:41.042 14:14:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:41.301 [2024-07-12 14:14:33.061013] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:41.301 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:41.301 { 00:09:41.301 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:41.301 "listen_address": { 00:09:41.301 "trtype": "tcp", 00:09:41.301 "traddr": "", 00:09:41.301 "trsvcid": "4421" 00:09:41.301 }, 00:09:41.301 "method": "nvmf_subsystem_remove_listener", 00:09:41.301 "req_id": 1 00:09:41.301 } 00:09:41.301 Got JSON-RPC error response 00:09:41.301 response: 00:09:41.301 { 00:09:41.301 "code": -32602, 00:09:41.301 "message": "Invalid parameters" 00:09:41.301 }' 00:09:41.301 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:41.301 { 00:09:41.301 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:41.301 "listen_address": { 00:09:41.301 "trtype": "tcp", 00:09:41.301 "traddr": "", 00:09:41.301 "trsvcid": "4421" 00:09:41.301 }, 00:09:41.301 "method": "nvmf_subsystem_remove_listener", 00:09:41.301 "req_id": 1 00:09:41.301 } 00:09:41.301 Got JSON-RPC error response 00:09:41.301 response: 00:09:41.301 { 00:09:41.301 "code": -32602, 00:09:41.301 "message": "Invalid parameters" 00:09:41.301 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:41.301 14:14:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25564 -i 0 00:09:41.301 [2024-07-12 14:14:33.249602] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25564: invalid cntlid range [0-65519] 00:09:41.301 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:41.301 { 00:09:41.301 "nqn": "nqn.2016-06.io.spdk:cnode25564", 00:09:41.301 "min_cntlid": 0, 00:09:41.301 "method": "nvmf_create_subsystem", 00:09:41.301 "req_id": 1 00:09:41.301 } 00:09:41.301 Got JSON-RPC error response 00:09:41.301 response: 00:09:41.301 { 00:09:41.301 "code": -32602, 00:09:41.301 "message": "Invalid cntlid range [0-65519]" 00:09:41.301 }' 00:09:41.301 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:41.301 { 00:09:41.301 "nqn": "nqn.2016-06.io.spdk:cnode25564", 00:09:41.301 "min_cntlid": 0, 00:09:41.301 "method": "nvmf_create_subsystem", 00:09:41.301 "req_id": 1 00:09:41.301 } 00:09:41.301 Got JSON-RPC error response 00:09:41.301 response: 00:09:41.301 { 00:09:41.301 "code": -32602, 00:09:41.301 "message": "Invalid cntlid range [0-65519]" 00:09:41.301 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.301 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32008 -i 65520 00:09:41.558 [2024-07-12 14:14:33.446258] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32008: invalid cntlid range [65520-65519] 00:09:41.559 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:41.559 { 00:09:41.559 "nqn": "nqn.2016-06.io.spdk:cnode32008", 00:09:41.559 "min_cntlid": 65520, 00:09:41.559 "method": "nvmf_create_subsystem", 00:09:41.559 "req_id": 1 00:09:41.559 } 00:09:41.559 Got JSON-RPC error response 00:09:41.559 
response: 00:09:41.559 { 00:09:41.559 "code": -32602, 00:09:41.559 "message": "Invalid cntlid range [65520-65519]" 00:09:41.559 }' 00:09:41.559 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:41.559 { 00:09:41.559 "nqn": "nqn.2016-06.io.spdk:cnode32008", 00:09:41.559 "min_cntlid": 65520, 00:09:41.559 "method": "nvmf_create_subsystem", 00:09:41.559 "req_id": 1 00:09:41.559 } 00:09:41.559 Got JSON-RPC error response 00:09:41.559 response: 00:09:41.559 { 00:09:41.559 "code": -32602, 00:09:41.559 "message": "Invalid cntlid range [65520-65519]" 00:09:41.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.559 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25331 -I 0 00:09:41.817 [2024-07-12 14:14:33.634925] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25331: invalid cntlid range [1-0] 00:09:41.817 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:41.817 { 00:09:41.817 "nqn": "nqn.2016-06.io.spdk:cnode25331", 00:09:41.817 "max_cntlid": 0, 00:09:41.817 "method": "nvmf_create_subsystem", 00:09:41.817 "req_id": 1 00:09:41.817 } 00:09:41.817 Got JSON-RPC error response 00:09:41.817 response: 00:09:41.817 { 00:09:41.817 "code": -32602, 00:09:41.817 "message": "Invalid cntlid range [1-0]" 00:09:41.817 }' 00:09:41.817 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:41.817 { 00:09:41.817 "nqn": "nqn.2016-06.io.spdk:cnode25331", 00:09:41.817 "max_cntlid": 0, 00:09:41.817 "method": "nvmf_create_subsystem", 00:09:41.817 "req_id": 1 00:09:41.817 } 00:09:41.817 Got JSON-RPC error response 00:09:41.817 response: 00:09:41.817 { 00:09:41.817 "code": -32602, 00:09:41.818 "message": "Invalid cntlid range [1-0]" 00:09:41.818 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.818 14:14:33 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1933 -I 65520 00:09:41.818 [2024-07-12 14:14:33.811462] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1933: invalid cntlid range [1-65520] 00:09:42.076 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:42.076 { 00:09:42.076 "nqn": "nqn.2016-06.io.spdk:cnode1933", 00:09:42.076 "max_cntlid": 65520, 00:09:42.076 "method": "nvmf_create_subsystem", 00:09:42.076 "req_id": 1 00:09:42.076 } 00:09:42.076 Got JSON-RPC error response 00:09:42.076 response: 00:09:42.076 { 00:09:42.076 "code": -32602, 00:09:42.076 "message": "Invalid cntlid range [1-65520]" 00:09:42.076 }' 00:09:42.076 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:42.076 { 00:09:42.076 "nqn": "nqn.2016-06.io.spdk:cnode1933", 00:09:42.076 "max_cntlid": 65520, 00:09:42.076 "method": "nvmf_create_subsystem", 00:09:42.076 "req_id": 1 00:09:42.076 } 00:09:42.076 Got JSON-RPC error response 00:09:42.076 response: 00:09:42.076 { 00:09:42.076 "code": -32602, 00:09:42.076 "message": "Invalid cntlid range [1-65520]" 00:09:42.076 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:42.076 14:14:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20291 -i 6 -I 5 00:09:42.076 [2024-07-12 14:14:33.988068] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20291: invalid cntlid range [6-5] 00:09:42.076 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:42.076 { 00:09:42.076 "nqn": "nqn.2016-06.io.spdk:cnode20291", 00:09:42.076 "min_cntlid": 6, 00:09:42.076 "max_cntlid": 5, 00:09:42.076 "method": "nvmf_create_subsystem", 00:09:42.076 "req_id": 1 00:09:42.076 } 00:09:42.076 Got JSON-RPC error response 
00:09:42.076 response: 00:09:42.076 { 00:09:42.076 "code": -32602, 00:09:42.076 "message": "Invalid cntlid range [6-5]" 00:09:42.076 }' 00:09:42.076 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:42.076 { 00:09:42.076 "nqn": "nqn.2016-06.io.spdk:cnode20291", 00:09:42.076 "min_cntlid": 6, 00:09:42.076 "max_cntlid": 5, 00:09:42.076 "method": "nvmf_create_subsystem", 00:09:42.076 "req_id": 1 00:09:42.076 } 00:09:42.076 Got JSON-RPC error response 00:09:42.076 response: 00:09:42.076 { 00:09:42.076 "code": -32602, 00:09:42.076 "message": "Invalid cntlid range [6-5]" 00:09:42.076 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:42.076 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:42.335 { 00:09:42.335 "name": "foobar", 00:09:42.335 "method": "nvmf_delete_target", 00:09:42.335 "req_id": 1 00:09:42.335 } 00:09:42.335 Got JSON-RPC error response 00:09:42.335 response: 00:09:42.335 { 00:09:42.335 "code": -32602, 00:09:42.335 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:42.335 }' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:42.335 { 00:09:42.335 "name": "foobar", 00:09:42.335 "method": "nvmf_delete_target", 00:09:42.335 "req_id": 1 00:09:42.335 } 00:09:42.335 Got JSON-RPC error response 00:09:42.335 response: 00:09:42.335 { 00:09:42.335 "code": -32602, 00:09:42.335 "message": "The specified target doesn't exist, cannot delete it." 
00:09:42.335 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.335 rmmod nvme_tcp 00:09:42.335 rmmod nvme_fabrics 00:09:42.335 rmmod nvme_keyring 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2427080 ']' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2427080 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2427080 ']' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2427080 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427080 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.335 14:14:34 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427080' 00:09:42.335 killing process with pid 2427080 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2427080 00:09:42.335 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2427080 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.594 14:14:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.497 14:14:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.497 00:09:44.497 real 0m11.565s 00:09:44.497 user 0m19.412s 00:09:44.497 sys 0m4.946s 00:09:44.497 14:14:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.497 14:14:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:44.497 ************************************ 00:09:44.497 END TEST nvmf_invalid 00:09:44.497 ************************************ 00:09:44.758 14:14:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:44.758 14:14:36 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
00:09:44.758 14:14:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.758 14:14:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.758 14:14:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.758 ************************************ 00:09:44.758 START TEST nvmf_abort 00:09:44.758 ************************************ 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.758 * Looking for test storage... 00:09:44.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.758 
14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.758 
14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.758 14:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.082 14:14:41 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:50.082 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.082 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:09:50.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:50.083 Found net devices under 0000:86:00.0: cvl_0_0 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:50.083 Found net devices under 0000:86:00.1: cvl_0_1 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:09:50.083 00:09:50.083 --- 10.0.0.2 ping statistics --- 00:09:50.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.083 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:50.083 00:09:50.083 --- 10.0.0.1 ping statistics --- 00:09:50.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.083 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2431250 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2431250 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2431250 ']' 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.083 14:14:41 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.083 14:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-07-12 14:14:41.557624] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:09:50.083 [2024-07-12 14:14:41.557665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.083 [2024-07-12 14:14:41.615138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.083 [2024-07-12 14:14:41.694718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.083 [2024-07-12 14:14:41.694755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.083 [2024-07-12 14:14:41.694762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.083 [2024-07-12 14:14:41.694769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.083 [2024-07-12 14:14:41.694774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:50.083 [2024-07-12 14:14:41.694809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.083 [2024-07-12 14:14:41.694894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.083 [2024-07-12 14:14:41.694896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.650 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.650 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:50.650 14:14:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.650 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 [2024-07-12 14:14:42.411472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 Malloc0 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 Delay0 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.651 [2024-07-12 14:14:42.491142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.651 14:14:42 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:50.651 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.651 [2024-07-12 14:14:42.643542] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.189 Initializing NVMe Controllers 00:09:53.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:53.189 controller IO queue size 128 less than required 00:09:53.189 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:53.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:53.189 Initialization complete. Launching workers. 
00:09:53.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41542 00:09:53.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41603, failed to submit 62 00:09:53.189 success 41546, unsuccess 57, failed 0 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.189 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:53.190 rmmod nvme_tcp 00:09:53.190 rmmod nvme_fabrics 00:09:53.190 rmmod nvme_keyring 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2431250 ']' 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2431250 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2431250 ']' 00:09:53.190 14:14:44 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2431250 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2431250 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2431250' 00:09:53.190 killing process with pid 2431250 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2431250 00:09:53.190 14:14:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2431250 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.190 14:14:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.724 14:14:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:55.724 00:09:55.724 real 0m10.677s 00:09:55.724 user 0m13.486s 00:09:55.724 sys 0m4.622s 00:09:55.724 14:14:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:55.724 14:14:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.724 ************************************ 00:09:55.724 END TEST nvmf_abort 00:09:55.724 ************************************ 00:09:55.724 14:14:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:55.724 14:14:47 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:55.724 14:14:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.724 14:14:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.724 14:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:55.724 ************************************ 00:09:55.724 START TEST nvmf_ns_hotplug_stress 00:09:55.724 ************************************ 00:09:55.724 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:55.725 * Looking for test storage... 
00:09:55.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.725 14:14:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.725 14:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.998 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.998 
14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.998 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.998 
Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.998 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.998 14:14:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:10:00.998 00:10:00.998 --- 10.0.0.2 ping statistics --- 00:10:00.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.998 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:00.998 00:10:00.998 --- 10.0.0.1 ping statistics --- 00:10:00.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.998 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.998 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2435245 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2435245 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2435245 ']' 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.999 14:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.999 [2024-07-12 14:14:52.740531] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:10:00.999 [2024-07-12 14:14:52.740577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.999 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.999 [2024-07-12 14:14:52.800211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.999 [2024-07-12 14:14:52.879635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.999 [2024-07-12 14:14:52.879671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.999 [2024-07-12 14:14:52.879678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.999 [2024-07-12 14:14:52.879684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.999 [2024-07-12 14:14:52.879689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.999 [2024-07-12 14:14:52.879783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.999 [2024-07-12 14:14:52.879866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.999 [2024-07-12 14:14:52.879868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.565 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.565 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:01.565 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.565 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.565 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.824 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.824 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:01.824 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.824 [2024-07-12 14:14:53.736588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.824 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.082 14:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.341 [2024-07-12 14:14:54.101917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:10:02.341 14:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.341 14:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:02.600 Malloc0 00:10:02.600 14:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:02.858 Delay0 00:10:02.858 14:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.118 14:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:03.118 NULL1 00:10:03.118 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:03.377 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:03.377 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2435731 00:10:03.377 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:03.377 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.377 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.637 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.637 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:03.637 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:03.896 true 00:10:03.896 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:03.896 14:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.155 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.414 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:04.414 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:04.414 true 00:10:04.414 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:04.414 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.673 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.932 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:04.932 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:04.932 true 00:10:05.191 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:05.191 14:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.191 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.450 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:05.450 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:05.709 true 00:10:05.709 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:05.709 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.968 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.968 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:05.968 14:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:06.227 true 00:10:06.227 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:06.227 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.486 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.745 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:06.745 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:06.745 true 00:10:06.745 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:06.745 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.004 14:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.263 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:07.263 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:07.263 true 00:10:07.263 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:07.263 14:14:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.522 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.781 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:07.781 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:08.039 true 00:10:08.039 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:08.039 14:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.040 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.298 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:08.298 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:08.557 true 00:10:08.557 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:08.557 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.816 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.816 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:08.816 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:09.106 true 00:10:09.106 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:09.106 14:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.364 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.622 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:09.622 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:09.622 true 00:10:09.622 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:09.622 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.880 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.140 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:10.140 14:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:10.398 true 00:10:10.398 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:10.398 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.398 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.657 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:10.657 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:10.915 true 00:10:10.915 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:10.915 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.174 14:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.174 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:11.174 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:11.432 true 00:10:11.432 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:11.432 14:15:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.691 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.951 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:11.951 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:11.951 true 00:10:12.209 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:12.209 14:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.209 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.468 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:12.468 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:12.727 true 00:10:12.727 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:12.727 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.986 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.986 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:12.986 14:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:13.245 true 00:10:13.245 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:13.245 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.504 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.763 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:13.763 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:13.763 true 00:10:13.763 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:13.763 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.021 14:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.280 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:14.280 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:14.539 true 00:10:14.539 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:14.539 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.539 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.798 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:14.798 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:15.057 true 00:10:15.057 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:15.057 14:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.315 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.573 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:15.573 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:15.573 true 00:10:15.573 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:15.573 14:15:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.830 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.087 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:16.087 14:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:16.345 true 00:10:16.345 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:16.345 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.345 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.603 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:16.603 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:16.861 true 00:10:16.861 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:16.861 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.120 14:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.379 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:17.379 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:17.379 true 00:10:17.379 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:17.379 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.637 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.896 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:17.896 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:17.896 true 00:10:17.896 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:17.896 14:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.155 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.414 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:18.414 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:18.673 true 00:10:18.673 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:18.673 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.673 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.932 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:18.932 14:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:19.190 true 00:10:19.190 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:19.190 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.449 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.449 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:19.449 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:19.708 true 00:10:19.708 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:19.708 14:15:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.968 14:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.228 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:20.228 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:20.228 true 00:10:20.228 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:20.228 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.487 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.746 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:20.746 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:21.004 true 00:10:21.004 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:21.004 14:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.262 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.262 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:21.262 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:21.519 true 00:10:21.519 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:21.519 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.778 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.037 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:22.037 14:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:22.037 true 00:10:22.037 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:22.037 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.296 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.554 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:22.554 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:22.813 true 00:10:22.813 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:22.813 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.813 14:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.072 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:23.072 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:23.330 true 00:10:23.330 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:23.330 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.589 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.847 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:23.847 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:23.847 true 00:10:23.847 14:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:23.847 14:15:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.106 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.364 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:24.364 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:24.622 true 00:10:24.622 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:24.622 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.882 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.882 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:24.882 14:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:25.170 true 00:10:25.170 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:25.170 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.478 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.478 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:25.478 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:25.737 true 00:10:25.737 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:25.737 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.995 14:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.254 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:26.254 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:26.254 true 00:10:26.254 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:26.254 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.512 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.771 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:26.771 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:27.030 true 00:10:27.030 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:27.030 14:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.289 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.289 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:27.289 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:27.548 true 00:10:27.548 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:27.548 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.808 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.067 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:28.067 14:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:28.067 true 00:10:28.067 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:28.067 14:15:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.326 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.584 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:28.584 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:28.843 true 00:10:28.843 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:28.843 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.843 14:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.102 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:29.102 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:29.361 true 00:10:29.361 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:29.361 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.620 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.879 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:29.879 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:29.879 true 00:10:29.879 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:29.879 14:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.137 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.396 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:30.396 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:30.654 true 00:10:30.654 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:30.654 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.654 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.912 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:30.912 14:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:31.173 true 00:10:31.173 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:31.173 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.433 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.433 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:31.433 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:31.691 true 00:10:31.691 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:31.691 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.950 14:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.209 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:32.209 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:32.209 true 00:10:32.468 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:32.468 14:15:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.468 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.727 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:32.727 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:32.985 true 00:10:32.985 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:32.985 14:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.244 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.244 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:33.244 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:33.503 true 00:10:33.503 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731 00:10:33.503 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.761 Initializing NVMe Controllers 00:10:33.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:10:33.761 Controller IO queue size 128, less than required.
00:10:33.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:33.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:33.761 Initialization complete. Launching workers.
00:10:33.761 ========================================================
00:10:33.761                                                                       Latency(us)
00:10:33.761 Device Information                                                    :     IOPS    MiB/s    Average      min      max
00:10:33.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27009.87    13.19    4738.98  2815.69  8676.37
00:10:33.761 ========================================================
00:10:33.761 Total                                                                 : 27009.87    13.19    4738.98  2815.69  8676.37
00:10:33.761
00:10:33.761 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:34.018 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:10:34.018 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:10:34.018 true
00:10:34.018 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2435731
00:10:34.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2435731) - No such process
00:10:34.018 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2435731
00:10:34.018 14:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:34.275 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.532 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:34.532 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:34.532 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:34.532 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.532 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:34.532 null0 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:34.789 null1 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.789 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:35.047 null2 00:10:35.047 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.047 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.047 14:15:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 
4096 00:10:35.305 null3 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:35.305 null4 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.305 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:35.564 null5 00:10:35.564 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.564 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.564 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:35.823 null6 00:10:35.823 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.823 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.823 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:35.823 null7 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
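The @44-@50 entries in the first part of this excerpt come from the main stress loop in test/nvmf/target/ns_hotplug_stress.sh: while the perf process (PID 2435731, probed with `kill -0`) is alive, the script hot-removes and re-adds NSID 1 on cnode1 and grows the NULL1 bdev by one unit each pass. A minimal standalone sketch of that control flow, with scripts/rpc.py replaced by an echo stub and a fixed iteration count standing in for the PID liveness check, looks like:

```shell
# Sketch of the ns_hotplug_stress.sh @44-@50 loop, reconstructed from the
# trace above. "rpc" is a stub for scripts/rpc.py so this runs without a
# live SPDK target; the real loop runs until `kill -0 $perf_pid` fails.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1024
iterations=3   # stand-in for the perf-process liveness check

for ((n = 0; n < iterations; n++)); do
    rpc nvmf_subsystem_remove_ns "$NQN" 1     # hot-remove NSID 1 (@45)
    rpc nvmf_subsystem_add_ns "$NQN" Delay0   # hot-add it back (@46)
    ((++null_size))                           # grow the size each pass (@49)
    rpc bdev_null_resize NULL1 "$null_size"   # resize NULL1 (@50)
done
echo "final null_size=$null_size"
```

When the perf process exits, the real script's `kill -0` fails (the "No such process" line above), and the loop falls through to the @53-@55 cleanup that waits on perf and removes both namespaces.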
00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
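The interleaved @14-@18 entries here are eight concurrent instances of the script's `add_remove` helper running in the background, which is why the trace lines arrive out of order. Reconstructed from the trace (the `nsid`/`bdev` locals at @14, the 10-iteration bound at @16, and the add/remove RPCs at @17/@18), with rpc.py stubbed so it runs without a target:

```shell
# Sketch of the add_remove helper from ns_hotplug_stress.sh (@14-@18),
# reconstructed from the trace. "rpc" is a stub for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        # attach the bdev as a namespace with a fixed NSID, then detach it
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

add_remove 1 null0   # e.g. churn NSID 1 backed by bdev null0
```

Each invocation churns one fixed NSID ten times, so eight parallel copies exercise concurrent attach/detach paths on the same subsystem without ever colliding on a namespace ID.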
00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
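The fan-out phase itself is visible at @58-@66: eight null bdevs (null0..null7, created earlier with `bdev_null_create <name> 100 4096`, i.e. 100 MB with a 4096-byte block size) are each handed to a backgrounded `add_remove`, whose PIDs are collected and reaped with `wait`. A runnable sketch, with `rpc` and `add_remove` stubbed down to single echoes:

```shell
# Sketch of the launcher at ns_hotplug_stress.sh @58-@66, reconstructed
# from the trace. Both helpers are echo stubs so this runs standalone;
# the real add_remove is the 10-iteration churn loop traced above.
rpc() { echo "rpc.py $*"; }
add_remove() { rpc nvmf_subsystem_add_ns -n "$1" nqn.2016-06.io.spdk:cnode1 "$2"; }

nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # 100 MB bdev, 4096-byte blocks (@60)
done

for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &       # NSID i+1 backed by null<i> (@63)
    pids+=($!)                               # collect worker PIDs (@64)
done

wait "${pids[@]}"                            # reap all eight workers (@66)
echo "launched ${#pids[@]} workers"
```

The `wait 2441898 2441899 ...` line in the log is exactly this final reap of the eight collected worker PIDs.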
00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2441898 2441899 2441902 2441903 2441906 2441907 2441910 2441911 00:10:36.082 14:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.082 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.342 14:15:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.342 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.601 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.860 
14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.860 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.119 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.379 14:15:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.379 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.638 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.897 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.156 14:15:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.156 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.415 14:15:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.415 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.674 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.934 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.935 14:15:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.935 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.193 14:15:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.193 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.193 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.193 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.194 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.451 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.710 14:15:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.710 
14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:39.710 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:39.710 rmmod nvme_tcp
00:10:39.969 rmmod nvme_fabrics
00:10:39.969 rmmod nvme_keyring
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2435245 ']'
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2435245
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2435245 ']'
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2435245
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2435245
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2435245'
killing process with pid 2435245
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2435245
00:10:39.969 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2435245
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:40.228 14:15:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:42.189 14:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:42.189
00:10:42.189 real 0m46.760s
00:10:42.189 user 3m18.774s
00:10:42.189 sys 0m16.548s
00:10:42.189 14:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:42.189 14:15:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:42.189 ************************************
00:10:42.189 END TEST nvmf_ns_hotplug_stress
00:10:42.189 ************************************
00:10:42.189 14:15:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:42.189 14:15:34 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:42.189 14:15:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:42.189 14:15:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:42.189 14:15:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:42.189 ************************************
00:10:42.189 START TEST nvmf_connect_stress
00:10:42.189 ************************************
00:10:42.189 14:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:42.189 *
Looking for test storage... 00:10:42.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.449 14:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.724 14:15:39 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.724 
14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.724 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.724 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.724 
14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.724 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.725 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.725 14:15:39 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.725 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:10:47.725 00:10:47.725 --- 10.0.0.2 ping statistics --- 00:10:47.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.725 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:10:47.725 00:10:47.725 --- 10.0.0.1 ping statistics --- 00:10:47.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.725 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.725 14:15:39 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2446051 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2446051 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2446051 ']' 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.725 14:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.725 [2024-07-12 14:15:39.381766] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:10:47.725 [2024-07-12 14:15:39.381813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.725 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.725 [2024-07-12 14:15:39.436567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.725 [2024-07-12 14:15:39.516940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:47.725 [2024-07-12 14:15:39.516977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.725 [2024-07-12 14:15:39.516984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.725 [2024-07-12 14:15:39.516990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.725 [2024-07-12 14:15:39.516996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.725 [2024-07-12 14:15:39.517096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.725 [2024-07-12 14:15:39.517169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.725 [2024-07-12 14:15:39.517170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 [2024-07-12 14:15:40.238044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.292 14:15:40 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 [2024-07-12 14:15:40.269497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 NULL1 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2446298 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.292 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.550 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.550 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.550 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.550 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.550 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i 
in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.551 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.809 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.809 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:48.809 14:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.809 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.809 14:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.068 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.068 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:49.068 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.068 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.068 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.634 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:10:49.634 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:49.634 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.634 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.634 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.892 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.892 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:49.892 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.892 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.892 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.150 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:50.150 14:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.150 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.151 14:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.409 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.409 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:50.409 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.409 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.409 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.667 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:50.667 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:50.667 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.667 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.667 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.232 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.232 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:51.232 14:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.232 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.232 14:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.490 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.490 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:51.490 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.490 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.490 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.748 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:51.748 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.748 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.748 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.007 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:52.007 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:52.007 14:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.007 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.007 14:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.265 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.265 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:52.265 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.265 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.265 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.833 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.833 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:52.833 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.833 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.833 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.091 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.091 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:53.091 14:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.091 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.091 14:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.350 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:53.350 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:53.350 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.350 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.350 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.608 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.608 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:53.608 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.608 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.608 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.176 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.176 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:54.176 14:15:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.176 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.176 14:15:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.434 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.434 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:54.434 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.434 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.434 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.693 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:54.693 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:54.693 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.693 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.693 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.951 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.951 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:54.951 14:15:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.951 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.951 14:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.209 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.210 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:55.210 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.210 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.210 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.777 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.777 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:55.777 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.777 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.777 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.035 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:56.035 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:56.035 14:15:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.035 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.035 14:15:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.292 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.292 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:56.292 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.292 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.292 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.548 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.548 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:56.548 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.548 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.548 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.116 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.116 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:57.116 14:15:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.116 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.116 14:15:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.374 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:57.374 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:57.374 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.374 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.374 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.632 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.632 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:57.632 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.633 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.633 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.891 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.891 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:57.891 14:15:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.891 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.891 14:15:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.149 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.149 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:58.149 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.149 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.149 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.408 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
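The long run of `kill -0 2446298` lines above is a liveness poll: between RPC batches, connect_stress.sh sends signal 0 (an existence check that delivers nothing) to confirm the stress worker is still running, until the worker's 10-second run ends and the check starts failing. A minimal sketch of that polling pattern, using a `sleep` stand-in rather than the actual connect_stress binary:

```shell
#!/usr/bin/env bash
# Liveness-poll sketch: run a worker in the background, then use
# `kill -0 $pid` (signal 0 = existence check, nothing is delivered)
# to do foreground work only while the worker is still alive.
sleep 2 &                  # stand-in for the connect_stress worker
pid=$!

while kill -0 "$pid" 2>/dev/null; do
    :                      # stand-in for the rpc_cmd batch issued each pass
    sleep 0.2
done

wait "$pid"                # reap the worker and pick up its exit status
echo "worker $pid has exited"
```

Once the worker exits, `kill -0` fails with "No such process", which is exactly the (expected) message the log prints at line 34 of connect_stress.sh below.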
00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2446298 00:10:58.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2446298) - No such process 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2446298 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.667 rmmod nvme_tcp 00:10:58.667 rmmod nvme_fabrics 00:10:58.667 rmmod nvme_keyring 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2446051 ']' 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2446051 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@948 -- # '[' -z 2446051 ']' 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2446051 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:58.667 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2446051 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2446051' 00:10:58.668 killing process with pid 2446051 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2446051 00:10:58.668 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2446051 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.926 14:15:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.833 14:15:52 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.833 00:11:00.833 real 0m18.713s 00:11:00.833 user 0m41.418s 00:11:00.833 sys 0m7.590s 00:11:00.833 14:15:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.833 14:15:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.833 ************************************ 00:11:00.833 END TEST nvmf_connect_stress 00:11:00.833 ************************************ 00:11:01.092 14:15:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:01.092 14:15:52 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:01.092 14:15:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.092 14:15:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.092 14:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.092 ************************************ 00:11:01.092 START TEST nvmf_fused_ordering 00:11:01.092 ************************************ 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:01.092 * Looking for test storage... 
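The teardown above shows `killprocess` checking `ps --no-headers -o comm=` on the stored PID before signalling it, so that a recycled PID now belonging to some other process (notably `sudo`) is never killed by mistake. A minimal sketch of that guarded-kill pattern, with a `sleep` stand-in for the nvmf target process:

```shell
#!/usr/bin/env bash
# Guarded-kill sketch: before killing a saved PID, confirm the process
# name still matches expectations, protecting against PID reuse.
sleep 5 &                  # stand-in for the long-running target (e.g. reactor_1)
pid=$!

name=$(ps --no-headers -o comm= "$pid")
if [ -n "$name" ] && [ "$name" != "sudo" ]; then
    echo "killing process with pid $pid ($name)"
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true   # reap; SIGTERM exit status is expected
```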
00:11:01.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.092 14:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.092 14:15:53 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.092 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.093 14:15:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.436 14:15:58 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.436 
14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:06.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.436 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:06.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.437 
14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:06.437 Found net devices under 0000:86:00.0: cvl_0_0 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.437 14:15:58 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:06.437 Found net devices under 0000:86:00.1: cvl_0_1 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.437 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.696 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:06.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:11:06.697 00:11:06.697 --- 10.0.0.2 ping statistics --- 00:11:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.697 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:06.697 00:11:06.697 --- 10.0.0.1 ping statistics --- 00:11:06.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.697 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.697 14:15:58 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2451444 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2451444 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2451444 ']' 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.697 14:15:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.697 [2024-07-12 14:15:58.657210] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:11:06.697 [2024-07-12 14:15:58.657260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.697 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.956 [2024-07-12 14:15:58.716345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.956 [2024-07-12 14:15:58.798005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
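[editor's note] The target is launched with `-m 0x2`, an SPDK core mask whose set bits select CPU cores; bit 1 being set is why the reactor below reports core 1. A small sketch of decoding such a mask:

```shell
# Decode an SPDK-style hex core mask (like the "-m 0x2" above) into core IDs.
mask=0x2
cores=()
for (( i = 0; i < 64; i++ )); do
  if (( (mask >> i) & 1 )); then
    cores+=("$i")               # bit i set -> core i is used
  fi
done
echo "cores: ${cores[*]}"       # -> cores: 1 (matches "Reactor started on core 1")
```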
00:11:06.956 [2024-07-12 14:15:58.798039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.956 [2024-07-12 14:15:58.798046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.956 [2024-07-12 14:15:58.798052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.956 [2024-07-12 14:15:58.798057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.956 [2024-07-12 14:15:58.798073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.524 [2024-07-12 14:15:59.498126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.524 [2024-07-12 14:15:59.514250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.524 NULL1 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.524 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.784 14:15:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:07.784 [2024-07-12 14:15:59.567017] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:11:07.784 [2024-07-12 14:15:59.567054] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451690 ] 00:11:07.784 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.043 Attached to nqn.2016-06.io.spdk:cnode1 00:11:08.043 Namespace ID: 1 size: 1GB 00:11:08.043 fused_ordering(0) 00:11:08.043 fused_ordering(1) 00:11:08.043 fused_ordering(2) 00:11:08.043 fused_ordering(3) 00:11:08.043 fused_ordering(4) 00:11:08.043 fused_ordering(5) 00:11:08.043 fused_ordering(6) 00:11:08.043 fused_ordering(7) 00:11:08.043 fused_ordering(8) 00:11:08.043 fused_ordering(9) 00:11:08.043 fused_ordering(10) 00:11:08.043 fused_ordering(11) 00:11:08.043 fused_ordering(12) 00:11:08.043 fused_ordering(13) 00:11:08.043 fused_ordering(14) 00:11:08.043 fused_ordering(15) 00:11:08.043 fused_ordering(16) 00:11:08.043 fused_ordering(17) 00:11:08.043 fused_ordering(18) 00:11:08.043 fused_ordering(19) 00:11:08.043 fused_ordering(20) 00:11:08.043 fused_ordering(21) 00:11:08.043 fused_ordering(22) 00:11:08.043 fused_ordering(23) 00:11:08.043 fused_ordering(24) 00:11:08.043 fused_ordering(25) 00:11:08.043 
fused_ordering(26) 00:11:08.043 … fused_ordering(204) 00:11:08.044 [entries 26-204 elided: consecutive counter output, no gaps] fused_ordering(205) 00:11:08.303 … fused_ordering(409) 00:11:08.304 [entries 205-409 elided: consecutive, no gaps] fused_ordering(410) 00:11:08.564 … fused_ordering(614) 00:11:08.564 [entries 410-614 elided: consecutive, no gaps] fused_ordering(615) 00:11:09.133 … fused_ordering(632)
00:11:09.133 fused_ordering(633) 00:11:09.133 fused_ordering(634) 00:11:09.133 fused_ordering(635) 00:11:09.133 fused_ordering(636) 00:11:09.133 fused_ordering(637) 00:11:09.133 fused_ordering(638) 00:11:09.133 fused_ordering(639) 00:11:09.133 fused_ordering(640) 00:11:09.133 fused_ordering(641) 00:11:09.133 fused_ordering(642) 00:11:09.133 fused_ordering(643) 00:11:09.133 fused_ordering(644) 00:11:09.133 fused_ordering(645) 00:11:09.133 fused_ordering(646) 00:11:09.133 fused_ordering(647) 00:11:09.133 fused_ordering(648) 00:11:09.133 fused_ordering(649) 00:11:09.133 fused_ordering(650) 00:11:09.133 fused_ordering(651) 00:11:09.133 fused_ordering(652) 00:11:09.133 fused_ordering(653) 00:11:09.133 fused_ordering(654) 00:11:09.133 fused_ordering(655) 00:11:09.133 fused_ordering(656) 00:11:09.133 fused_ordering(657) 00:11:09.133 fused_ordering(658) 00:11:09.133 fused_ordering(659) 00:11:09.133 fused_ordering(660) 00:11:09.133 fused_ordering(661) 00:11:09.133 fused_ordering(662) 00:11:09.133 fused_ordering(663) 00:11:09.133 fused_ordering(664) 00:11:09.133 fused_ordering(665) 00:11:09.133 fused_ordering(666) 00:11:09.133 fused_ordering(667) 00:11:09.133 fused_ordering(668) 00:11:09.133 fused_ordering(669) 00:11:09.133 fused_ordering(670) 00:11:09.133 fused_ordering(671) 00:11:09.133 fused_ordering(672) 00:11:09.133 fused_ordering(673) 00:11:09.133 fused_ordering(674) 00:11:09.133 fused_ordering(675) 00:11:09.133 fused_ordering(676) 00:11:09.133 fused_ordering(677) 00:11:09.133 fused_ordering(678) 00:11:09.133 fused_ordering(679) 00:11:09.133 fused_ordering(680) 00:11:09.133 fused_ordering(681) 00:11:09.133 fused_ordering(682) 00:11:09.133 fused_ordering(683) 00:11:09.133 fused_ordering(684) 00:11:09.133 fused_ordering(685) 00:11:09.133 fused_ordering(686) 00:11:09.133 fused_ordering(687) 00:11:09.133 fused_ordering(688) 00:11:09.133 fused_ordering(689) 00:11:09.133 fused_ordering(690) 00:11:09.133 fused_ordering(691) 00:11:09.133 fused_ordering(692) 00:11:09.133 
fused_ordering(693) 00:11:09.133 fused_ordering(694) 00:11:09.133 fused_ordering(695) 00:11:09.133 fused_ordering(696) 00:11:09.133 fused_ordering(697) 00:11:09.133 fused_ordering(698) 00:11:09.133 fused_ordering(699) 00:11:09.133 fused_ordering(700) 00:11:09.133 fused_ordering(701) 00:11:09.133 fused_ordering(702) 00:11:09.133 fused_ordering(703) 00:11:09.133 fused_ordering(704) 00:11:09.133 fused_ordering(705) 00:11:09.133 fused_ordering(706) 00:11:09.133 fused_ordering(707) 00:11:09.133 fused_ordering(708) 00:11:09.133 fused_ordering(709) 00:11:09.133 fused_ordering(710) 00:11:09.133 fused_ordering(711) 00:11:09.133 fused_ordering(712) 00:11:09.133 fused_ordering(713) 00:11:09.133 fused_ordering(714) 00:11:09.133 fused_ordering(715) 00:11:09.133 fused_ordering(716) 00:11:09.133 fused_ordering(717) 00:11:09.133 fused_ordering(718) 00:11:09.133 fused_ordering(719) 00:11:09.133 fused_ordering(720) 00:11:09.133 fused_ordering(721) 00:11:09.133 fused_ordering(722) 00:11:09.133 fused_ordering(723) 00:11:09.133 fused_ordering(724) 00:11:09.133 fused_ordering(725) 00:11:09.133 fused_ordering(726) 00:11:09.133 fused_ordering(727) 00:11:09.133 fused_ordering(728) 00:11:09.133 fused_ordering(729) 00:11:09.133 fused_ordering(730) 00:11:09.133 fused_ordering(731) 00:11:09.133 fused_ordering(732) 00:11:09.134 fused_ordering(733) 00:11:09.134 fused_ordering(734) 00:11:09.134 fused_ordering(735) 00:11:09.134 fused_ordering(736) 00:11:09.134 fused_ordering(737) 00:11:09.134 fused_ordering(738) 00:11:09.134 fused_ordering(739) 00:11:09.134 fused_ordering(740) 00:11:09.134 fused_ordering(741) 00:11:09.134 fused_ordering(742) 00:11:09.134 fused_ordering(743) 00:11:09.134 fused_ordering(744) 00:11:09.134 fused_ordering(745) 00:11:09.134 fused_ordering(746) 00:11:09.134 fused_ordering(747) 00:11:09.134 fused_ordering(748) 00:11:09.134 fused_ordering(749) 00:11:09.134 fused_ordering(750) 00:11:09.134 fused_ordering(751) 00:11:09.134 fused_ordering(752) 00:11:09.134 fused_ordering(753) 
00:11:09.134 fused_ordering(754) 00:11:09.134 fused_ordering(755) 00:11:09.134 fused_ordering(756) 00:11:09.134 fused_ordering(757) 00:11:09.134 fused_ordering(758) 00:11:09.134 fused_ordering(759) 00:11:09.134 fused_ordering(760) 00:11:09.134 fused_ordering(761) 00:11:09.134 fused_ordering(762) 00:11:09.134 fused_ordering(763) 00:11:09.134 fused_ordering(764) 00:11:09.134 fused_ordering(765) 00:11:09.134 fused_ordering(766) 00:11:09.134 fused_ordering(767) 00:11:09.134 fused_ordering(768) 00:11:09.134 fused_ordering(769) 00:11:09.134 fused_ordering(770) 00:11:09.134 fused_ordering(771) 00:11:09.134 fused_ordering(772) 00:11:09.134 fused_ordering(773) 00:11:09.134 fused_ordering(774) 00:11:09.134 fused_ordering(775) 00:11:09.134 fused_ordering(776) 00:11:09.134 fused_ordering(777) 00:11:09.134 fused_ordering(778) 00:11:09.134 fused_ordering(779) 00:11:09.134 fused_ordering(780) 00:11:09.134 fused_ordering(781) 00:11:09.134 fused_ordering(782) 00:11:09.134 fused_ordering(783) 00:11:09.134 fused_ordering(784) 00:11:09.134 fused_ordering(785) 00:11:09.134 fused_ordering(786) 00:11:09.134 fused_ordering(787) 00:11:09.134 fused_ordering(788) 00:11:09.134 fused_ordering(789) 00:11:09.134 fused_ordering(790) 00:11:09.134 fused_ordering(791) 00:11:09.134 fused_ordering(792) 00:11:09.134 fused_ordering(793) 00:11:09.134 fused_ordering(794) 00:11:09.134 fused_ordering(795) 00:11:09.134 fused_ordering(796) 00:11:09.134 fused_ordering(797) 00:11:09.134 fused_ordering(798) 00:11:09.134 fused_ordering(799) 00:11:09.134 fused_ordering(800) 00:11:09.134 fused_ordering(801) 00:11:09.134 fused_ordering(802) 00:11:09.134 fused_ordering(803) 00:11:09.134 fused_ordering(804) 00:11:09.134 fused_ordering(805) 00:11:09.134 fused_ordering(806) 00:11:09.134 fused_ordering(807) 00:11:09.134 fused_ordering(808) 00:11:09.134 fused_ordering(809) 00:11:09.134 fused_ordering(810) 00:11:09.134 fused_ordering(811) 00:11:09.134 fused_ordering(812) 00:11:09.134 fused_ordering(813) 00:11:09.134 
fused_ordering(814) 00:11:09.134 fused_ordering(815) 00:11:09.134 fused_ordering(816) 00:11:09.134 fused_ordering(817) 00:11:09.134 fused_ordering(818) 00:11:09.134 fused_ordering(819) 00:11:09.134 fused_ordering(820) 00:11:09.393 fused_ordering(821) 00:11:09.393 fused_ordering(822) 00:11:09.393 fused_ordering(823) 00:11:09.393 fused_ordering(824) 00:11:09.393 fused_ordering(825) 00:11:09.393 fused_ordering(826) 00:11:09.393 fused_ordering(827) 00:11:09.393 fused_ordering(828) 00:11:09.393 fused_ordering(829) 00:11:09.393 fused_ordering(830) 00:11:09.393 fused_ordering(831) 00:11:09.393 fused_ordering(832) 00:11:09.393 fused_ordering(833) 00:11:09.393 fused_ordering(834) 00:11:09.393 fused_ordering(835) 00:11:09.393 fused_ordering(836) 00:11:09.393 fused_ordering(837) 00:11:09.393 fused_ordering(838) 00:11:09.393 fused_ordering(839) 00:11:09.393 fused_ordering(840) 00:11:09.393 fused_ordering(841) 00:11:09.393 fused_ordering(842) 00:11:09.393 fused_ordering(843) 00:11:09.393 fused_ordering(844) 00:11:09.393 fused_ordering(845) 00:11:09.393 fused_ordering(846) 00:11:09.393 fused_ordering(847) 00:11:09.393 fused_ordering(848) 00:11:09.393 fused_ordering(849) 00:11:09.393 fused_ordering(850) 00:11:09.393 fused_ordering(851) 00:11:09.393 fused_ordering(852) 00:11:09.393 fused_ordering(853) 00:11:09.393 fused_ordering(854) 00:11:09.393 fused_ordering(855) 00:11:09.393 fused_ordering(856) 00:11:09.393 fused_ordering(857) 00:11:09.393 fused_ordering(858) 00:11:09.393 fused_ordering(859) 00:11:09.393 fused_ordering(860) 00:11:09.393 fused_ordering(861) 00:11:09.393 fused_ordering(862) 00:11:09.393 fused_ordering(863) 00:11:09.393 fused_ordering(864) 00:11:09.393 fused_ordering(865) 00:11:09.393 fused_ordering(866) 00:11:09.393 fused_ordering(867) 00:11:09.393 fused_ordering(868) 00:11:09.393 fused_ordering(869) 00:11:09.393 fused_ordering(870) 00:11:09.393 fused_ordering(871) 00:11:09.393 fused_ordering(872) 00:11:09.393 fused_ordering(873) 00:11:09.393 fused_ordering(874) 
00:11:09.393 fused_ordering(875) 00:11:09.393 fused_ordering(876) 00:11:09.393 fused_ordering(877) 00:11:09.393 fused_ordering(878) 00:11:09.393 fused_ordering(879) 00:11:09.393 fused_ordering(880) 00:11:09.393 fused_ordering(881) 00:11:09.393 fused_ordering(882) 00:11:09.393 fused_ordering(883) 00:11:09.393 fused_ordering(884) 00:11:09.393 fused_ordering(885) 00:11:09.393 fused_ordering(886) 00:11:09.393 fused_ordering(887) 00:11:09.393 fused_ordering(888) 00:11:09.393 fused_ordering(889) 00:11:09.393 fused_ordering(890) 00:11:09.393 fused_ordering(891) 00:11:09.393 fused_ordering(892) 00:11:09.393 fused_ordering(893) 00:11:09.393 fused_ordering(894) 00:11:09.393 fused_ordering(895) 00:11:09.393 fused_ordering(896) 00:11:09.393 fused_ordering(897) 00:11:09.393 fused_ordering(898) 00:11:09.393 fused_ordering(899) 00:11:09.393 fused_ordering(900) 00:11:09.393 fused_ordering(901) 00:11:09.393 fused_ordering(902) 00:11:09.393 fused_ordering(903) 00:11:09.393 fused_ordering(904) 00:11:09.393 fused_ordering(905) 00:11:09.393 fused_ordering(906) 00:11:09.393 fused_ordering(907) 00:11:09.393 fused_ordering(908) 00:11:09.393 fused_ordering(909) 00:11:09.393 fused_ordering(910) 00:11:09.393 fused_ordering(911) 00:11:09.393 fused_ordering(912) 00:11:09.393 fused_ordering(913) 00:11:09.393 fused_ordering(914) 00:11:09.393 fused_ordering(915) 00:11:09.393 fused_ordering(916) 00:11:09.393 fused_ordering(917) 00:11:09.393 fused_ordering(918) 00:11:09.393 fused_ordering(919) 00:11:09.393 fused_ordering(920) 00:11:09.393 fused_ordering(921) 00:11:09.393 fused_ordering(922) 00:11:09.393 fused_ordering(923) 00:11:09.393 fused_ordering(924) 00:11:09.393 fused_ordering(925) 00:11:09.393 fused_ordering(926) 00:11:09.393 fused_ordering(927) 00:11:09.393 fused_ordering(928) 00:11:09.393 fused_ordering(929) 00:11:09.393 fused_ordering(930) 00:11:09.393 fused_ordering(931) 00:11:09.393 fused_ordering(932) 00:11:09.393 fused_ordering(933) 00:11:09.393 fused_ordering(934) 00:11:09.393 
fused_ordering(935) 00:11:09.393 fused_ordering(936) 00:11:09.393 fused_ordering(937) 00:11:09.393 fused_ordering(938) 00:11:09.393 fused_ordering(939) 00:11:09.393 fused_ordering(940) 00:11:09.393 fused_ordering(941) 00:11:09.393 fused_ordering(942) 00:11:09.393 fused_ordering(943) 00:11:09.393 fused_ordering(944) 00:11:09.393 fused_ordering(945) 00:11:09.393 fused_ordering(946) 00:11:09.393 fused_ordering(947) 00:11:09.393 fused_ordering(948) 00:11:09.393 fused_ordering(949) 00:11:09.393 fused_ordering(950) 00:11:09.393 fused_ordering(951) 00:11:09.393 fused_ordering(952) 00:11:09.393 fused_ordering(953) 00:11:09.393 fused_ordering(954) 00:11:09.393 fused_ordering(955) 00:11:09.393 fused_ordering(956) 00:11:09.393 fused_ordering(957) 00:11:09.393 fused_ordering(958) 00:11:09.393 fused_ordering(959) 00:11:09.393 fused_ordering(960) 00:11:09.393 fused_ordering(961) 00:11:09.393 fused_ordering(962) 00:11:09.393 fused_ordering(963) 00:11:09.393 fused_ordering(964) 00:11:09.393 fused_ordering(965) 00:11:09.393 fused_ordering(966) 00:11:09.393 fused_ordering(967) 00:11:09.393 fused_ordering(968) 00:11:09.393 fused_ordering(969) 00:11:09.393 fused_ordering(970) 00:11:09.393 fused_ordering(971) 00:11:09.393 fused_ordering(972) 00:11:09.393 fused_ordering(973) 00:11:09.393 fused_ordering(974) 00:11:09.393 fused_ordering(975) 00:11:09.393 fused_ordering(976) 00:11:09.393 fused_ordering(977) 00:11:09.393 fused_ordering(978) 00:11:09.393 fused_ordering(979) 00:11:09.393 fused_ordering(980) 00:11:09.393 fused_ordering(981) 00:11:09.393 fused_ordering(982) 00:11:09.393 fused_ordering(983) 00:11:09.393 fused_ordering(984) 00:11:09.393 fused_ordering(985) 00:11:09.393 fused_ordering(986) 00:11:09.393 fused_ordering(987) 00:11:09.393 fused_ordering(988) 00:11:09.393 fused_ordering(989) 00:11:09.393 fused_ordering(990) 00:11:09.393 fused_ordering(991) 00:11:09.393 fused_ordering(992) 00:11:09.393 fused_ordering(993) 00:11:09.393 fused_ordering(994) 00:11:09.393 fused_ordering(995) 
00:11:09.393 fused_ordering(996) 00:11:09.393 fused_ordering(997) 00:11:09.393 fused_ordering(998) 00:11:09.393 fused_ordering(999) 00:11:09.393 fused_ordering(1000) 00:11:09.393 fused_ordering(1001) 00:11:09.393 fused_ordering(1002) 00:11:09.393 fused_ordering(1003) 00:11:09.393 fused_ordering(1004) 00:11:09.393 fused_ordering(1005) 00:11:09.393 fused_ordering(1006) 00:11:09.393 fused_ordering(1007) 00:11:09.393 fused_ordering(1008) 00:11:09.393 fused_ordering(1009) 00:11:09.393 fused_ordering(1010) 00:11:09.393 fused_ordering(1011) 00:11:09.393 fused_ordering(1012) 00:11:09.393 fused_ordering(1013) 00:11:09.393 fused_ordering(1014) 00:11:09.393 fused_ordering(1015) 00:11:09.393 fused_ordering(1016) 00:11:09.393 fused_ordering(1017) 00:11:09.393 fused_ordering(1018) 00:11:09.393 fused_ordering(1019) 00:11:09.393 fused_ordering(1020) 00:11:09.393 fused_ordering(1021) 00:11:09.393 fused_ordering(1022) 00:11:09.393 fused_ordering(1023) 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.393 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:09.393 rmmod nvme_tcp 00:11:09.393 rmmod nvme_fabrics 00:11:09.393 rmmod nvme_keyring 00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2451444 ']'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2451444 ']'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2451444'
killing process with pid 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2451444
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:09.653 14:16:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:12.190 14:16:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:12.190
00:11:12.190 real 0m10.818s user 0m5.467s sys 0m5.604s
00:11:12.190 14:16:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:12.190 14:16:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:12.190 ************************************
00:11:12.190 END TEST nvmf_fused_ordering
00:11:12.190 ************************************
00:11:12.190 14:16:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:12.190 14:16:03 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:12.190 14:16:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:12.190 14:16:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:12.190 14:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:12.190 ************************************
00:11:12.190 START TEST nvmf_delete_subsystem
00:11:12.190 ************************************
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:12.190 * Looking for test storage...
00:11:12.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:12.190 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable
00:11:12.191 14:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=()
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:17.467 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:17.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:11:17.468 00:11:17.468 --- 10.0.0.2 ping statistics --- 00:11:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.468 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:17.468 00:11:17.468 --- 10.0.0.1 ping statistics --- 00:11:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.468 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.468 
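[editor's note] The nvmf_tcp_init steps above (nvmf/common.sh@229-268) build a two-endpoint TCP topology by moving the target-side port into a network namespace. A dry-run sketch of that plumbing is below; run() only echoes each command, so it needs no root or hardware, and the interface/namespace names are simply the ones this log happens to use.

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology from the log above.
# run() prints instead of executing, so this is safe anywhere.
run() { echo "+ $*"; }

TGT_IF=cvl_0_0              # target-side port, moved into the namespace
INI_IF=cvl_0_1              # initiator-side port, stays in the root ns
NS=cvl_0_0_ns_spdk          # namespace name from the log

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2      # verify reachability, as the log does
```

Putting the target port in its own namespace is what lets initiator and target share one host without the kernel short-circuiting 10.0.0.1 -> 10.0.0.2 over loopback.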
14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2455437 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2455437 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2455437 ']' 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.468 14:16:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.468 [2024-07-12 14:16:09.323511] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:11:17.468 [2024-07-12 14:16:09.323553] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.468 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.468 [2024-07-12 14:16:09.381276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.468 [2024-07-12 14:16:09.460012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.468 [2024-07-12 14:16:09.460048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.468 [2024-07-12 14:16:09.460055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.468 [2024-07-12 14:16:09.460060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.468 [2024-07-12 14:16:09.460065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:17.468 [2024-07-12 14:16:09.460100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.468 [2024-07-12 14:16:09.460104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 [2024-07-12 14:16:10.179238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.401 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 [2024-07-12 14:16:10.195364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 NULL1 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 Delay0 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
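[editor's note] The rpc_cmd calls above assemble the whole target stack: TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev as the namespace. A dry-run sketch of the same sequence follows; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions (the log uses the test framework's rpc_cmd wrapper), and rpc() only prints the calls.

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence from delete_subsystem.sh@15-24 above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed invocation, not from the log
rpc() { echo "+ $RPC $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512B blocks
# Delay every I/O by 1s (avg and p99, reads and writes) so requests are still
# in flight when the subsystem is deleted -- the point of this test.
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the key design choice: with 1s artificial latency and queue depth 128, the later nvmf_delete_subsystem is guaranteed to race against outstanding I/O.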
00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2455661 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:18.402 14:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.402 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.402 [2024-07-12 14:16:10.269886] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:20.304 14:16:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.304 14:16:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.304 14:16:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Write completed with error (sct=0, sc=8) 00:11:20.563 starting I/O failed: -6 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 starting I/O failed: -6 00:11:20.563 Write completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Write completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 starting I/O failed: -6 00:11:20.563 Read completed with error (sct=0, sc=8) 00:11:20.563 Write completed with error (sct=0, sc=8) 00:11:20.563 Read completed with error (sct=0, 
sc=8)
[... several hundred repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' records elided; distinctive error records kept below ...]
00:11:20.563 [2024-07-12 14:16:12.390690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f647800cfe0 is same with the state(5) to be set
00:11:20.563 [2024-07-12 14:16:12.391175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f647800d600 is same with the state(5) to be set
00:11:21.500 [2024-07-12 14:16:13.364925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408ac0 is same with the state(5) to be set
00:11:21.500 [2024-07-12 14:16:13.392880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24073e0 is same with the state(5) to be set
00:11:21.500 [2024-07-12 14:16:13.393099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f647800d2f0 is same with the state(5) to be set
00:11:21.500 [2024-07-12 14:16:13.394087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24077a0 is same with the state(5) to be set
00:11:21.500 Read completed with error (sct=0, sc=8) 00:11:21.500 Write completed with error (sct=0, sc=8) 00:11:21.500 Read completed with error (sct=0, sc=8) 00:11:21.500 [2024-07-12 14:16:13.394262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2407000 is same with the state(5) to be set 00:11:21.500 Initializing NVMe Controllers 00:11:21.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.500 Controller IO queue size 128, less than required. 00:11:21.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.500 Initialization complete. Launching workers. 00:11:21.500 ======================================================== 00:11:21.500 Latency(us) 00:11:21.500 Device Information : IOPS MiB/s Average min max 00:11:21.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.17 0.09 948436.36 508.92 1012594.14 00:11:21.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.94 0.08 877515.07 227.60 1010673.33 00:11:21.500 ======================================================== 00:11:21.500 Total : 349.11 0.17 916960.62 227.60 1012594.14 00:11:21.500 00:11:21.500 [2024-07-12 14:16:13.394791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2408ac0 (9): Bad file descriptor 00:11:21.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:21.500 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.500 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:21.500 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 2455661 00:11:21.500 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:22.067 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:22.067 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2455661 00:11:22.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2455661) - No such process 00:11:22.067 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2455661 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2455661 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2455661 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.068 
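The trace above shows delete_subsystem.sh waiting for the perf process to die by polling `kill -0` with a 0.5 s sleep, then confirming via the NOT/wait helpers that the pid is really gone ("kill: (2455661) - No such process"). A stand-alone sketch of that polling pattern, with `sleep 1 &` standing in for spdk_nvme_perf (the timeout bound of 30 iterations matches the trace; everything else is illustrative):

```shell
# Poll until a background process exits, bailing out after ~15s
# (30 iterations x 0.5s), as delete_subsystem.sh does above.
sleep 1 &            # stand-in for the real spdk_nvme_perf workload
pid=$!

delay=0
while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "timed out waiting for $pid" >&2
        exit 1
    fi
    sleep 0.5
done

# kill -0 now fails ("No such process"), which is exactly the
# condition the script treats as successful teardown.
echo "process $pid exited"
```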
14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.068 [2024-07-12 14:16:13.925553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2456158 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.068 14:16:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:22.068 14:16:13 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.068 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.068 [2024-07-12 14:16:13.991209] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:22.635 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.635 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:22.635 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.206 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.206 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:23.206 14:16:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.464 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.464 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:23.464 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.032 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.032 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:24.032 14:16:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.600 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.600 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:24.600 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.167 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.167 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:25.167 14:16:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.167 Initializing NVMe Controllers 00:11:25.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.167 Controller IO queue size 128, less than required. 00:11:25.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.167 Initialization complete. Launching workers. 00:11:25.167 ======================================================== 00:11:25.167 Latency(us) 00:11:25.167 Device Information : IOPS MiB/s Average min max 00:11:25.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002377.59 1000132.71 1008900.92 00:11:25.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004627.98 1000284.52 1010261.23 00:11:25.167 ======================================================== 00:11:25.167 Total : 256.00 0.12 1003502.79 1000132.71 1010261.23 00:11:25.167 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2456158 00:11:25.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2456158) - No such process 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2456158 00:11:25.736 14:16:17 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.736 rmmod nvme_tcp 00:11:25.736 rmmod nvme_fabrics 00:11:25.736 rmmod nvme_keyring 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2455437 ']' 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2455437 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2455437 ']' 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2455437 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2455437 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2455437' 00:11:25.736 killing process with pid 2455437 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2455437 00:11:25.736 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2455437 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.995 14:16:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.940 14:16:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.940 00:11:27.940 real 0m16.048s 00:11:27.940 user 0m30.277s 00:11:27.940 sys 0m4.801s 00:11:27.940 14:16:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.940 14:16:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.940 ************************************ 00:11:27.940 END TEST nvmf_delete_subsystem 00:11:27.940 ************************************ 00:11:27.940 14:16:19 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:27.940 14:16:19 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:27.940 14:16:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.940 14:16:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.940 14:16:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.940 ************************************ 00:11:27.940 START TEST nvmf_ns_masking 00:11:27.940 ************************************ 00:11:27.940 14:16:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.199 * Looking for test storage... 00:11:28.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.199 14:16:19 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=287dff16-0d84-498e-bd0e-cb94c8da1aec 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=cf6d0f00-8781-4d6f-9187-7fddffe16a6a 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # 
HOSTID=610ffcbd-bc96-4d76-bd30-2afd4b18f7a0 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.199 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.200 14:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.467 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.468 
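The trace above builds per-family PCI-ID arrays (e810, x722, mlx) from the pci_bus_cache before deciding which NICs to use. A minimal sketch of that vendor:device classification over hardcoded sample pairs — the sample list and the `other` bucket are illustrative, not real lspci output or SPDK's actual cache:

```shell
# Classify (vendor:device) pairs the way the trace does: Intel E810
# (0x1592/0x159b) vs Intel X722 (0x37d2) vs everything else.
declare -a e810 x722 other
sample_devs=("0x8086:0x159b" "0x8086:0x159b" "0x8086:0x37d2" "0x15b3:0x1017")

for dev in "${sample_devs[@]}"; do
    case "$dev" in
        0x8086:0x1592|0x8086:0x159b) e810+=("$dev") ;;   # E810 family
        0x8086:0x37d2)               x722+=("$dev") ;;   # X722 family
        *)                           other+=("$dev") ;;  # e.g. Mellanox
    esac
done
echo "e810=${#e810[@]} x722=${#x722[@]} other=${#other[@]}"
```

With this sample input, both 0x159b devices land in the e810 bucket, mirroring the two "Found 0000:86:00.x (0x8086 - 0x159b)" lines below.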
14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.468 14:16:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.468 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.468 14:16:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.468 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.468 
14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:11:33.468 00:11:33.468 --- 10.0.0.2 ping statistics --- 00:11:33.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.468 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
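The nvmf_tcp_init sequence traced above moves the target NIC (cvl_0_0, 10.0.0.2/24) into namespace cvl_0_0_ns_spdk, leaves the initiator NIC (cvl_0_1, 10.0.0.1/24) in the root namespace, opens TCP port 4420, and ping-verifies both directions. A dry-run sketch of the same steps; the `run` wrapper only records and echoes each command so the sketch is safe to execute without root (swap it for `sudo "$@"` to apply for real):

```shell
# Dry-run of the namespace topology set up by nvmf_tcp_init above.
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target NIC into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # initiator -> target
```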
00:11:33.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms
00:11:33.468
00:11:33.468 --- 10.0.0.1 ping statistics ---
00:11:33.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:33.468 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2460311
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2460311
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2460311 ']'
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:33.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:33.468 14:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:33.468 [2024-07-12 14:16:25.427566] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:11:33.469 [2024-07-12 14:16:25.427611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:33.469 EAL: No free 2048 kB hugepages reported on node 1
00:11:33.727 [2024-07-12 14:16:25.484531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:33.727 [2024-07-12 14:16:25.556368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:33.727 [2024-07-12 14:16:25.556411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:33.727 [2024-07-12 14:16:25.556418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:33.727 [2024-07-12 14:16:25.556423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:33.727 [2024-07-12 14:16:25.556428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
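The nvmf_tcp_init steps traced above can be sketched as a plain script. This is a hedged illustration, not the test's own code: the cvl_0_* interface names and 10.0.0.x addresses are just what this CI host happened to have, and the real commands need root, so the sketch defaults to printing them (DRYRUN=1).

```shell
#!/bin/sh
# Sketch of nvmf_tcp_init as seen in the log: move the target interface into
# its own network namespace, address both ends, bring links up, and open
# TCP port 4420 for the NVMe/TCP listener. DRYRUN=1 (the default here) only
# echoes each command instead of executing it.
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings that follow in the log are the sanity check that this split actually carries traffic in both directions.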
00:11:33.727 [2024-07-12 14:16:25.556445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:34.293 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:34.551 [2024-07-12 14:16:26.412453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:34.551 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:11:34.551 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:11:34.551 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:34.809 Malloc1
00:11:34.809 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:34.809 Malloc2
00:11:35.067 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:35.067 14:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:11:35.326 14:16:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:35.326 [2024-07-12 14:16:27.312077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:35.326 14:16:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:11:35.326 14:16:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 610ffcbd-bc96-4d76-bd30-2afd4b18f7a0 -a 10.0.0.2 -s 4420 -i 4
00:11:35.673 14:16:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:11:35.673 14:16:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:35.673 14:16:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:35.673 14:16:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:35.673 14:16:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:37.575 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:37.833 [ 0]:0x1
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:37.833 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
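The ns_is_visible helper traced here pipes `nvme list-ns` through `grep` and then reads the NGUID from `nvme id-ns ... -o json | jq -r .nguid`; as the later NOT-wrapped runs in this log show, a namespace masked from the host reports an all-zero NGUID. A minimal sketch of that comparison, with the nguid passed in directly so it runs without an NVMe controller:

```shell
#!/bin/sh
# Sketch of the visibility test at ns_masking.sh@45: a 32-hex-digit NGUID of
# all zeros means the namespace is not visible to this host. The real helper
# obtains $1 via `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`.
nguid_visible() {
  [ "$1" != "00000000000000000000000000000000" ]
}
```

For example, `nguid_visible d332806a19294797ba057910b83e3a28` succeeds (visible), while the all-zero value from a masked namespace fails the check.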
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d332806a19294797ba057910b83e3a28
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d332806a19294797ba057910b83e3a28 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:38.092 [ 1]:0x2
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:11:38.092 14:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:38.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:38.092 14:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:38.351 14:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 610ffcbd-bc96-4d76-bd30-2afd4b18f7a0 -a 10.0.0.2 -s 4420 -i 4
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:11:38.610 14:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
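The waitforserial loop just traced counts how many block devices carry the subsystem's serial and retries until the count matches the expected namespace count. A sketch of the counting step, with `lsblk -l -o NAME,SERIAL` output fed on stdin so it is self-contained:

```shell
#!/bin/sh
# Sketch of waitforserial's per-iteration check: count lines of
# `lsblk -l -o NAME,SERIAL` whose SERIAL column matches, as the log's
# `grep -c SPDKISFASTANDAWESOME` does. The loop then compares this count
# to nvme_device_counter and sleeps/retries until they agree.
count_serial() { grep -c "$1"; }
```

With one visible namespace the count is 1; after the second connect later in this log it must reach 2 before the test proceeds.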
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:41.143 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:41.144 [ 0]:0x2
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:41.144 [ 0]:0x1
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d332806a19294797ba057910b83e3a28
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d332806a19294797ba057910b83e3a28 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:41.144 [ 1]:0x2
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.144 14:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:41.144 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:41.403 [ 0]:0x2
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:41.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:41.403 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 610ffcbd-bc96-4d76-bd30-2afd4b18f7a0 -a 10.0.0.2 -s 4420 -i 4
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:11:41.661 14:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:43.562 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:43.820 [ 0]:0x1
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d332806a19294797ba057910b83e3a28
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d332806a19294797ba057910b83e3a28 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:43.820 [ 1]:0x2
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:43.820 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:44.079 [ 0]:0x2
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:44.079 14:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:44.079 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:44.338 [2024-07-12 14:16:36.157563] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:11:44.338 request:
00:11:44.338 {
00:11:44.338 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:44.338 "nsid": 2,
00:11:44.338 "host": "nqn.2016-06.io.spdk:host1",
00:11:44.338 "method": "nvmf_ns_remove_host",
00:11:44.338 "req_id": 1
00:11:44.338 }
00:11:44.338 Got JSON-RPC error response
00:11:44.338 response:
00:11:44.338 {
00:11:44.338 "code": -32602,
00:11:44.338 "message": "Invalid parameters"
00:11:44.338 }
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:44.338 [ 0]:0x2
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=33968bf1b07b43eebc58bc4e58c6adec
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 33968bf1b07b43eebc58bc4e58c6adec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:44.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2462233
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2462233 /var/tmp/host.sock
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2462233 ']'
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:11:44.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.338 14:16:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.597 [2024-07-12 14:16:36.359643] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:11:44.597 [2024-07-12 14:16:36.359687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462233 ] 00:11:44.597 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.597 [2024-07-12 14:16:36.414268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.597 [2024-07-12 14:16:36.493603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.165 14:16:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.165 14:16:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:45.165 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.423 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.682 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 287dff16-0d84-498e-bd0e-cb94c8da1aec 00:11:45.682 14:16:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:45.682 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 287DFF160D84498EBD0ECB94C8DA1AEC -i 00:11:45.939 14:16:37 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid cf6d0f00-8781-4d6f-9187-7fddffe16a6a 00:11:45.939 14:16:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:45.939 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CF6D0F0087814D6F91877FDDFFE16A6A -i 00:11:45.939 14:16:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.196 14:16:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:46.453 14:16:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:46.453 14:16:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:46.711 nvme0n1 00:11:46.711 14:16:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:46.711 14:16:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:47.278 nvme1n2 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:47.278 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:47.536 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 287dff16-0d84-498e-bd0e-cb94c8da1aec == \2\8\7\d\f\f\1\6\-\0\d\8\4\-\4\9\8\e\-\b\d\0\e\-\c\b\9\4\c\8\d\a\1\a\e\c ]] 00:11:47.536 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:47.536 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:47.536 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ cf6d0f00-8781-4d6f-9187-7fddffe16a6a == \c\f\6\d\0\f\0\0\-\8\7\8\1\-\4\d\6\f\-\9\1\8\7\-\7\f\d\d\f\f\e\1\6\a\6\a ]] 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2462233 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 2462233 ']' 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2462233 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2462233 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2462233' 00:11:47.794 killing process with pid 2462233 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2462233 00:11:47.794 14:16:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2462233 00:11:48.053 14:16:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.313 rmmod nvme_tcp 00:11:48.313 rmmod 
nvme_fabrics 00:11:48.313 rmmod nvme_keyring 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2460311 ']' 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2460311 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2460311 ']' 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2460311 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2460311 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2460311' 00:11:48.313 killing process with pid 2460311 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2460311 00:11:48.313 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2460311 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.572 14:16:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.474 14:16:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.474 00:11:50.474 real 0m22.566s 00:11:50.474 user 0m24.527s 00:11:50.474 sys 0m6.067s 00:11:50.474 14:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.474 14:16:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.474 ************************************ 00:11:50.474 END TEST nvmf_ns_masking 00:11:50.474 ************************************ 00:11:50.734 14:16:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:50.734 14:16:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:50.734 14:16:42 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.734 14:16:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.734 14:16:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.734 14:16:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.734 ************************************ 00:11:50.734 START TEST nvmf_nvme_cli 00:11:50.734 ************************************ 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.734 * Looking for test storage... 
00:11:50.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.734 14:16:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.734 14:16:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.734 14:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.008 14:16:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.008 14:16:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:56.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:56.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.008 14:16:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:56.008 Found net devices under 0000:86:00.0: cvl_0_0 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:56.008 Found net devices under 0000:86:00.1: cvl_0_1 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.008 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:11:56.009 00:11:56.009 --- 10.0.0.2 ping statistics --- 00:11:56.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.009 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:11:56.009 00:11:56.009 --- 10.0.0.1 ping statistics --- 00:11:56.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.009 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2466389 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2466389 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2466389 ']' 
00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.009 14:16:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.009 [2024-07-12 14:16:47.867192] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:11:56.009 [2024-07-12 14:16:47.867236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.009 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.009 [2024-07-12 14:16:47.924645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.009 [2024-07-12 14:16:48.005947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.009 [2024-07-12 14:16:48.005982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.009 [2024-07-12 14:16:48.005989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.009 [2024-07-12 14:16:48.005994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.009 [2024-07-12 14:16:48.005999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.009 [2024-07-12 14:16:48.006061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.009 [2024-07-12 14:16:48.006078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.009 [2024-07-12 14:16:48.006167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.009 [2024-07-12 14:16:48.006168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 [2024-07-12 14:16:48.722411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 Malloc0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 
14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 Malloc1 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 [2024-07-12 14:16:48.803893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:56.949 00:11:56.949 Discovery Log Number of Records 2, Generation counter 2 00:11:56.949 =====Discovery Log Entry 0====== 00:11:56.949 trtype: tcp 00:11:56.949 adrfam: ipv4 00:11:56.949 subtype: current discovery subsystem 00:11:56.949 treq: not required 00:11:56.949 portid: 0 00:11:56.949 trsvcid: 4420 00:11:56.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:56.949 traddr: 10.0.0.2 00:11:56.949 eflags: explicit discovery connections, duplicate discovery information 00:11:56.949 sectype: none 00:11:56.949 =====Discovery Log Entry 1====== 00:11:56.949 trtype: tcp 00:11:56.949 adrfam: ipv4 00:11:56.949 subtype: nvme subsystem 00:11:56.949 treq: not required 00:11:56.949 portid: 0 00:11:56.949 trsvcid: 4420 00:11:56.949 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:56.949 traddr: 10.0.0.2 00:11:56.949 eflags: none 00:11:56.949 sectype: none 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:56.949 14:16:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.332 14:16:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:58.333 14:16:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.333 14:16:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.333 14:16:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:58.333 14:16:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:58.333 14:16:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:00.292 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:00.293 /dev/nvme0n1 ]] 00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.293 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:00.551 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.810 rmmod nvme_tcp 00:12:00.810 rmmod nvme_fabrics 00:12:00.810 rmmod nvme_keyring 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2466389 ']' 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2466389 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 2466389 ']' 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2466389 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.810 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2466389 00:12:01.069 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.069 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.069 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2466389' 00:12:01.069 killing process with pid 2466389 00:12:01.069 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2466389 00:12:01.069 14:16:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2466389 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.069 14:16:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.604 14:16:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.604 00:12:03.604 real 0m12.600s 00:12:03.604 user 
0m21.666s 00:12:03.604 sys 0m4.457s 00:12:03.604 14:16:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.604 14:16:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 ************************************ 00:12:03.604 END TEST nvmf_nvme_cli 00:12:03.604 ************************************ 00:12:03.604 14:16:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:03.604 14:16:55 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:03.604 14:16:55 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.604 14:16:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.604 14:16:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.604 14:16:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 ************************************ 00:12:03.604 START TEST nvmf_vfio_user 00:12:03.604 ************************************ 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.604 * Looking for test storage... 
00:12:03.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.604 
14:16:55 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2467676 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2467676' 00:12:03.604 Process pid: 2467676 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2467676 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2467676 ']' 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.604 14:16:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 [2024-07-12 14:16:55.383170] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:12:03.605 [2024-07-12 14:16:55.383213] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.605 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.605 [2024-07-12 14:16:55.437595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.605 [2024-07-12 14:16:55.517920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.605 [2024-07-12 14:16:55.517959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.605 [2024-07-12 14:16:55.517966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.605 [2024-07-12 14:16:55.517972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.605 [2024-07-12 14:16:55.517977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:03.605 [2024-07-12 14:16:55.518018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.605 [2024-07-12 14:16:55.518113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.605 [2024-07-12 14:16:55.518198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.605 [2024-07-12 14:16:55.518199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.541 14:16:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.541 14:16:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:04.541 14:16:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:05.476 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:05.476 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:05.476 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:05.476 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:05.476 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:05.477 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:05.735 Malloc1 00:12:05.735 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:05.993 14:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:05.993 14:16:57 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:06.250 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.250 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:06.250 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.509 Malloc2 00:12:06.509 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:06.509 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:06.767 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:07.026 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:07.026 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:07.026 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.027 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:07.027 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:07.027 14:16:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:07.027 [2024-07-12 14:16:58.922550] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:12:07.027 [2024-07-12 14:16:58.922595] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468378 ] 00:12:07.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.027 [2024-07-12 14:16:58.951902] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:07.027 [2024-07-12 14:16:58.961753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.027 [2024-07-12 14:16:58.961773] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7febf7adf000 00:12:07.027 [2024-07-12 14:16:58.962750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.963752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.964757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.965760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.966767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.967778] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.968783] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.969794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.027 [2024-07-12 14:16:58.970798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.027 [2024-07-12 14:16:58.970807] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7febf7ad4000 00:12:07.027 [2024-07-12 14:16:58.971748] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.027 [2024-07-12 14:16:58.984359] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:07.027 [2024-07-12 14:16:58.984384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:07.027 [2024-07-12 14:16:58.986907] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:07.027 [2024-07-12 14:16:58.986941] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:07.027 [2024-07-12 14:16:58.987015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:07.027 [2024-07-12 14:16:58.987032] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:07.027 [2024-07-12 14:16:58.987037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:07.027 [2024-07-12 14:16:58.987902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:07.027 [2024-07-12 14:16:58.987914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:07.027 [2024-07-12 14:16:58.987920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:07.027 [2024-07-12 14:16:58.988910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:07.027 [2024-07-12 14:16:58.988918] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:07.027 [2024-07-12 14:16:58.988925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.989912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:07.027 [2024-07-12 14:16:58.989920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.990914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:07.027 [2024-07-12 14:16:58.990921] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:07.027 [2024-07-12 14:16:58.990925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.990931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.991036] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:07.027 [2024-07-12 14:16:58.991040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.991044] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:07.027 [2024-07-12 14:16:58.991926] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:07.027 [2024-07-12 14:16:58.992930] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:07.027 [2024-07-12 14:16:58.993937] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.027 [2024-07-12 14:16:58.994939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.027 [2024-07-12 14:16:58.995002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:07.027 [2024-07-12 14:16:58.995957] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:07.027 [2024-07-12 14:16:58.995965] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:07.027 [2024-07-12 14:16:58.995969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.995985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:07.027 [2024-07-12 14:16:58.995993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996008] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.027 [2024-07-12 14:16:58.996012] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.027 [2024-07-12 14:16:58.996025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.027 [2024-07-12 14:16:58.996059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:07.027 [2024-07-12 14:16:58.996067] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:07.027 [2024-07-12 14:16:58.996075] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:07.027 [2024-07-12 14:16:58.996079] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:12:07.027 [2024-07-12 14:16:58.996083] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:07.027 [2024-07-12 14:16:58.996087] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:07.027 [2024-07-12 14:16:58.996091] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:07.027 [2024-07-12 14:16:58.996095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:07.027 [2024-07-12 14:16:58.996120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:07.027 [2024-07-12 14:16:58.996131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.027 [2024-07-12 14:16:58.996139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.027 [2024-07-12 14:16:58.996146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.027 [2024-07-12 14:16:58.996154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.027 [2024-07-12 14:16:58.996158] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:07.027 [2024-07-12 14:16:58.996182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:07.027 [2024-07-12 14:16:58.996187] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:07.027 [2024-07-12 14:16:58.996191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.027 [2024-07-12 14:16:58.996221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:07.027 [2024-07-12 14:16:58.996270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:12:07.027 [2024-07-12 14:16:58.996276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:07.027 [2024-07-12 14:16:58.996283] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:07.027 [2024-07-12 14:16:58.996287] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:07.027 [2024-07-12 14:16:58.996292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996312] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:07.028 [2024-07-12 14:16:58.996323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996335] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.028 [2024-07-12 14:16:58.996339] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.028 [2024-07-12 14:16:58.996345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:07.028 
[2024-07-12 14:16:58.996374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996392] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.028 [2024-07-12 14:16:58.996395] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.028 [2024-07-12 14:16:58.996401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996455] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:07.028 [2024-07-12 14:16:58.996459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:07.028 [2024-07-12 14:16:58.996463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:07.028 [2024-07-12 14:16:58.996479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996549] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996560] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:07.028 [2024-07-12 14:16:58.996565] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:07.028 [2024-07-12 14:16:58.996568] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:07.028 [2024-07-12 14:16:58.996571] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:07.028 [2024-07-12 14:16:58.996576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:07.028 [2024-07-12 14:16:58.996583] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:07.028 [2024-07-12 14:16:58.996586] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:07.028 [2024-07-12 14:16:58.996592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996598] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:07.028 [2024-07-12 14:16:58.996602] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.028 [2024-07-12 14:16:58.996607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996613] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:07.028 [2024-07-12 14:16:58.996617] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:07.028 [2024-07-12 14:16:58.996622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:07.028 [2024-07-12 14:16:58.996630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:07.028 [2024-07-12 14:16:58.996656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:07.028 ===================================================== 00:12:07.028 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:07.028 ===================================================== 00:12:07.028 Controller Capabilities/Features 00:12:07.028 ================================ 00:12:07.028 Vendor ID: 4e58 00:12:07.028 Subsystem Vendor ID: 4e58 00:12:07.028 Serial Number: SPDK1 00:12:07.028 Model Number: SPDK bdev Controller 00:12:07.028 Firmware Version: 24.09 00:12:07.028 Recommended Arb Burst: 6 00:12:07.028 IEEE OUI Identifier: 8d 6b 50 00:12:07.028 Multi-path I/O 00:12:07.028 May have multiple subsystem ports: Yes 00:12:07.028 May have multiple controllers: Yes 00:12:07.028 Associated with SR-IOV VF: No 00:12:07.028 Max Data Transfer Size: 131072 00:12:07.028 Max Number of Namespaces: 32 00:12:07.028 Max Number of I/O Queues: 127 00:12:07.028 NVMe Specification Version (VS): 1.3 00:12:07.028 NVMe Specification Version (Identify): 1.3 00:12:07.028 Maximum Queue Entries: 256 00:12:07.028 
Contiguous Queues Required: Yes 00:12:07.028 Arbitration Mechanisms Supported 00:12:07.028 Weighted Round Robin: Not Supported 00:12:07.028 Vendor Specific: Not Supported 00:12:07.028 Reset Timeout: 15000 ms 00:12:07.028 Doorbell Stride: 4 bytes 00:12:07.028 NVM Subsystem Reset: Not Supported 00:12:07.028 Command Sets Supported 00:12:07.028 NVM Command Set: Supported 00:12:07.028 Boot Partition: Not Supported 00:12:07.028 Memory Page Size Minimum: 4096 bytes 00:12:07.028 Memory Page Size Maximum: 4096 bytes 00:12:07.028 Persistent Memory Region: Not Supported 00:12:07.028 Optional Asynchronous Events Supported 00:12:07.028 Namespace Attribute Notices: Supported 00:12:07.028 Firmware Activation Notices: Not Supported 00:12:07.028 ANA Change Notices: Not Supported 00:12:07.028 PLE Aggregate Log Change Notices: Not Supported 00:12:07.028 LBA Status Info Alert Notices: Not Supported 00:12:07.028 EGE Aggregate Log Change Notices: Not Supported 00:12:07.028 Normal NVM Subsystem Shutdown event: Not Supported 00:12:07.028 Zone Descriptor Change Notices: Not Supported 00:12:07.028 Discovery Log Change Notices: Not Supported 00:12:07.028 Controller Attributes 00:12:07.028 128-bit Host Identifier: Supported 00:12:07.028 Non-Operational Permissive Mode: Not Supported 00:12:07.028 NVM Sets: Not Supported 00:12:07.028 Read Recovery Levels: Not Supported 00:12:07.028 Endurance Groups: Not Supported 00:12:07.028 Predictable Latency Mode: Not Supported 00:12:07.028 Traffic Based Keep ALive: Not Supported 00:12:07.028 Namespace Granularity: Not Supported 00:12:07.028 SQ Associations: Not Supported 00:12:07.028 UUID List: Not Supported 00:12:07.028 Multi-Domain Subsystem: Not Supported 00:12:07.028 Fixed Capacity Management: Not Supported 00:12:07.028 Variable Capacity Management: Not Supported 00:12:07.028 Delete Endurance Group: Not Supported 00:12:07.028 Delete NVM Set: Not Supported 00:12:07.028 Extended LBA Formats Supported: Not Supported 00:12:07.028 Flexible Data Placement 
Supported: Not Supported 00:12:07.028 00:12:07.028 Controller Memory Buffer Support 00:12:07.028 ================================ 00:12:07.028 Supported: No 00:12:07.028 00:12:07.028 Persistent Memory Region Support 00:12:07.028 ================================ 00:12:07.028 Supported: No 00:12:07.028 00:12:07.028 Admin Command Set Attributes 00:12:07.028 ============================ 00:12:07.028 Security Send/Receive: Not Supported 00:12:07.028 Format NVM: Not Supported 00:12:07.028 Firmware Activate/Download: Not Supported 00:12:07.028 Namespace Management: Not Supported 00:12:07.028 Device Self-Test: Not Supported 00:12:07.028 Directives: Not Supported 00:12:07.028 NVMe-MI: Not Supported 00:12:07.028 Virtualization Management: Not Supported 00:12:07.028 Doorbell Buffer Config: Not Supported 00:12:07.028 Get LBA Status Capability: Not Supported 00:12:07.028 Command & Feature Lockdown Capability: Not Supported 00:12:07.028 Abort Command Limit: 4 00:12:07.028 Async Event Request Limit: 4 00:12:07.028 Number of Firmware Slots: N/A 00:12:07.028 Firmware Slot 1 Read-Only: N/A 00:12:07.028 Firmware Activation Without Reset: N/A 00:12:07.028 Multiple Update Detection Support: N/A 00:12:07.029 Firmware Update Granularity: No Information Provided 00:12:07.029 Per-Namespace SMART Log: No 00:12:07.029 Asymmetric Namespace Access Log Page: Not Supported 00:12:07.029 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:07.029 Command Effects Log Page: Supported 00:12:07.029 Get Log Page Extended Data: Supported 00:12:07.029 Telemetry Log Pages: Not Supported 00:12:07.029 Persistent Event Log Pages: Not Supported 00:12:07.029 Supported Log Pages Log Page: May Support 00:12:07.029 Commands Supported & Effects Log Page: Not Supported 00:12:07.029 Feature Identifiers & Effects Log Page:May Support 00:12:07.029 NVMe-MI Commands & Effects Log Page: May Support 00:12:07.029 Data Area 4 for Telemetry Log: Not Supported 00:12:07.029 Error Log Page Entries Supported: 128 00:12:07.029 Keep 
Alive: Supported 00:12:07.029 Keep Alive Granularity: 10000 ms 00:12:07.029 00:12:07.029 NVM Command Set Attributes 00:12:07.029 ========================== 00:12:07.029 Submission Queue Entry Size 00:12:07.029 Max: 64 00:12:07.029 Min: 64 00:12:07.029 Completion Queue Entry Size 00:12:07.029 Max: 16 00:12:07.029 Min: 16 00:12:07.029 Number of Namespaces: 32 00:12:07.029 Compare Command: Supported 00:12:07.029 Write Uncorrectable Command: Not Supported 00:12:07.029 Dataset Management Command: Supported 00:12:07.029 Write Zeroes Command: Supported 00:12:07.029 Set Features Save Field: Not Supported 00:12:07.029 Reservations: Not Supported 00:12:07.029 Timestamp: Not Supported 00:12:07.029 Copy: Supported 00:12:07.029 Volatile Write Cache: Present 00:12:07.029 Atomic Write Unit (Normal): 1 00:12:07.029 Atomic Write Unit (PFail): 1 00:12:07.029 Atomic Compare & Write Unit: 1 00:12:07.029 Fused Compare & Write: Supported 00:12:07.029 Scatter-Gather List 00:12:07.029 SGL Command Set: Supported (Dword aligned) 00:12:07.029 SGL Keyed: Not Supported 00:12:07.029 SGL Bit Bucket Descriptor: Not Supported 00:12:07.029 SGL Metadata Pointer: Not Supported 00:12:07.029 Oversized SGL: Not Supported 00:12:07.029 SGL Metadata Address: Not Supported 00:12:07.029 SGL Offset: Not Supported 00:12:07.029 Transport SGL Data Block: Not Supported 00:12:07.029 Replay Protected Memory Block: Not Supported 00:12:07.029 00:12:07.029 Firmware Slot Information 00:12:07.029 ========================= 00:12:07.029 Active slot: 1 00:12:07.029 Slot 1 Firmware Revision: 24.09 00:12:07.029 00:12:07.029 00:12:07.029 Commands Supported and Effects 00:12:07.029 ============================== 00:12:07.029 Admin Commands 00:12:07.029 -------------- 00:12:07.029 Get Log Page (02h): Supported 00:12:07.029 Identify (06h): Supported 00:12:07.029 Abort (08h): Supported 00:12:07.029 Set Features (09h): Supported 00:12:07.029 Get Features (0Ah): Supported 00:12:07.029 Asynchronous Event Request (0Ch): Supported 
00:12:07.029 Keep Alive (18h): Supported 00:12:07.029 I/O Commands 00:12:07.029 ------------ 00:12:07.029 Flush (00h): Supported LBA-Change 00:12:07.029 Write (01h): Supported LBA-Change 00:12:07.029 Read (02h): Supported 00:12:07.029 Compare (05h): Supported 00:12:07.029 Write Zeroes (08h): Supported LBA-Change 00:12:07.029 Dataset Management (09h): Supported LBA-Change 00:12:07.029 Copy (19h): Supported LBA-Change 00:12:07.029 00:12:07.029 Error Log 00:12:07.029 ========= 00:12:07.029 00:12:07.029 Arbitration 00:12:07.029 =========== 00:12:07.029 Arbitration Burst: 1 00:12:07.029 00:12:07.029 Power Management 00:12:07.029 ================ 00:12:07.029 Number of Power States: 1 00:12:07.029 Current Power State: Power State #0 00:12:07.029 Power State #0: 00:12:07.029 Max Power: 0.00 W 00:12:07.029 Non-Operational State: Operational 00:12:07.029 Entry Latency: Not Reported 00:12:07.029 Exit Latency: Not Reported 00:12:07.029 Relative Read Throughput: 0 00:12:07.029 Relative Read Latency: 0 00:12:07.029 Relative Write Throughput: 0 00:12:07.029 Relative Write Latency: 0 00:12:07.029 Idle Power: Not Reported 00:12:07.029 Active Power: Not Reported 00:12:07.029 Non-Operational Permissive Mode: Not Supported 00:12:07.029 00:12:07.029 Health Information 00:12:07.029 ================== 00:12:07.029 Critical Warnings: 00:12:07.029 Available Spare Space: OK 00:12:07.029 Temperature: OK 00:12:07.029 Device Reliability: OK 00:12:07.029 Read Only: No 00:12:07.029 Volatile Memory Backup: OK 00:12:07.029 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:07.029 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.029 Available Spare: 0% 00:12:07.029 Available Sp[2024-07-12 14:16:58.996747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:07.029 [2024-07-12 14:16:58.996758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:12:07.029 [2024-07-12 14:16:58.996783] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:07.029 [2024-07-12 14:16:58.996791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.029 [2024-07-12 14:16:58.996797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.029 [2024-07-12 14:16:58.996802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.029 [2024-07-12 14:16:58.996807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.029 [2024-07-12 14:16:58.999386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.029 [2024-07-12 14:16:58.999396] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:07.029 [2024-07-12 14:16:58.999978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.029 [2024-07-12 14:16:59.000027] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:07.029 [2024-07-12 14:16:59.000032] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:07.029 [2024-07-12 14:16:59.000984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:07.029 [2024-07-12 14:16:59.000994] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:12:07.029 [2024-07-12 14:16:59.001041] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:07.029 [2024-07-12 14:16:59.003019] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.289 are Threshold: 0% 00:12:07.289 Life Percentage Used: 0% 00:12:07.289 Data Units Read: 0 00:12:07.289 Data Units Written: 0 00:12:07.289 Host Read Commands: 0 00:12:07.289 Host Write Commands: 0 00:12:07.289 Controller Busy Time: 0 minutes 00:12:07.289 Power Cycles: 0 00:12:07.289 Power On Hours: 0 hours 00:12:07.289 Unsafe Shutdowns: 0 00:12:07.289 Unrecoverable Media Errors: 0 00:12:07.289 Lifetime Error Log Entries: 0 00:12:07.289 Warning Temperature Time: 0 minutes 00:12:07.289 Critical Temperature Time: 0 minutes 00:12:07.289 00:12:07.289 Number of Queues 00:12:07.289 ================ 00:12:07.289 Number of I/O Submission Queues: 127 00:12:07.289 Number of I/O Completion Queues: 127 00:12:07.289 00:12:07.289 Active Namespaces 00:12:07.289 ================= 00:12:07.289 Namespace ID:1 00:12:07.289 Error Recovery Timeout: Unlimited 00:12:07.289 Command Set Identifier: NVM (00h) 00:12:07.289 Deallocate: Supported 00:12:07.289 Deallocated/Unwritten Error: Not Supported 00:12:07.289 Deallocated Read Value: Unknown 00:12:07.289 Deallocate in Write Zeroes: Not Supported 00:12:07.289 Deallocated Guard Field: 0xFFFF 00:12:07.289 Flush: Supported 00:12:07.289 Reservation: Supported 00:12:07.289 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.289 Size (in LBAs): 131072 (0GiB) 00:12:07.289 Capacity (in LBAs): 131072 (0GiB) 00:12:07.289 Utilization (in LBAs): 131072 (0GiB) 00:12:07.289 NGUID: E4620C97A459475FA6F3A0F37B46E4FD 00:12:07.289 UUID: e4620c97-a459-475f-a6f3-a0f37b46e4fd 00:12:07.289 Thin Provisioning: Not Supported 00:12:07.289 Per-NS Atomic Units: Yes 00:12:07.289 Atomic Boundary Size (Normal): 0 
00:12:07.289 Atomic Boundary Size (PFail): 0 00:12:07.289 Atomic Boundary Offset: 0 00:12:07.289 Maximum Single Source Range Length: 65535 00:12:07.289 Maximum Copy Length: 65535 00:12:07.289 Maximum Source Range Count: 1 00:12:07.289 NGUID/EUI64 Never Reused: No 00:12:07.289 Namespace Write Protected: No 00:12:07.289 Number of LBA Formats: 1 00:12:07.289 Current LBA Format: LBA Format #00 00:12:07.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.289 00:12:07.289 14:16:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.289 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.289 [2024-07-12 14:16:59.217145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.563 Initializing NVMe Controllers 00:12:12.563 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.563 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.563 Initialization complete. Launching workers. 
00:12:12.563 ======================================================== 00:12:12.563 Latency(us) 00:12:12.563 Device Information : IOPS MiB/s Average min max 00:12:12.563 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39905.88 155.88 3207.12 948.82 7390.06 00:12:12.563 ======================================================== 00:12:12.563 Total : 39905.88 155.88 3207.12 948.82 7390.06 00:12:12.563 00:12:12.563 [2024-07-12 14:17:04.235441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.563 14:17:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:12.563 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.563 [2024-07-12 14:17:04.464500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.890 Initializing NVMe Controllers 00:12:17.890 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:17.890 Initialization complete. Launching workers. 
00:12:17.890 ======================================================== 00:12:17.890 Latency(us) 00:12:17.890 Device Information : IOPS MiB/s Average min max 00:12:17.890 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.46 62.67 7978.15 6163.64 8968.79 00:12:17.890 ======================================================== 00:12:17.890 Total : 16042.46 62.67 7978.15 6163.64 8968.79 00:12:17.890 00:12:17.890 [2024-07-12 14:17:09.498908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.890 14:17:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:17.890 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.890 [2024-07-12 14:17:09.693860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.164 [2024-07-12 14:17:14.805896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.164 Initializing NVMe Controllers 00:12:23.164 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.164 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:23.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:23.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:23.164 Initialization complete. Launching workers. 
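The two perf summaries above print both IOPS and MiB/s for 4 KiB I/Os (`-o 4096`); the throughput column is just IOPS scaled by the I/O size. A quick sanity check, using only the numbers printed in the tables (the function name is ours, not part of spdk_nvme_perf):

```python
# Throughput (MiB/s) = IOPS * io_size / 2^20, for a fixed I/O size.
IO_SIZE = 4096  # bytes, matching the -o 4096 flag above

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# Read run:  39905.88 IOPS  -> 155.88 MiB/s (matches the table)
# Write run: 16042.46 IOPS  ->  62.67 MiB/s (matches the table)
print(round(mib_per_s(39905.88), 2))  # 155.88
print(round(mib_per_s(16042.46), 2))  # 62.67
```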
00:12:23.164 Starting thread on core 2 00:12:23.164 Starting thread on core 3 00:12:23.164 Starting thread on core 1 00:12:23.164 14:17:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:23.164 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.164 [2024-07-12 14:17:15.088753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.450 [2024-07-12 14:17:18.148569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.450 Initializing NVMe Controllers 00:12:26.450 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.450 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.450 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:26.450 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:26.450 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:26.450 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:26.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:26.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:26.450 Initialization complete. Launching workers. 
00:12:26.450 Starting thread on core 1 with urgent priority queue 00:12:26.450 Starting thread on core 2 with urgent priority queue 00:12:26.450 Starting thread on core 3 with urgent priority queue 00:12:26.450 Starting thread on core 0 with urgent priority queue 00:12:26.450 SPDK bdev Controller (SPDK1 ) core 0: 10078.67 IO/s 9.92 secs/100000 ios 00:12:26.450 SPDK bdev Controller (SPDK1 ) core 1: 9141.67 IO/s 10.94 secs/100000 ios 00:12:26.450 SPDK bdev Controller (SPDK1 ) core 2: 8659.33 IO/s 11.55 secs/100000 ios 00:12:26.450 SPDK bdev Controller (SPDK1 ) core 3: 6652.00 IO/s 15.03 secs/100000 ios 00:12:26.450 ======================================================== 00:12:26.450 00:12:26.450 14:17:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.450 [2024-07-12 14:17:18.422881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.450 Initializing NVMe Controllers 00:12:26.450 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.450 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.450 Namespace ID: 1 size: 0GB 00:12:26.450 Initialization complete. 00:12:26.450 INFO: using host memory buffer for IO 00:12:26.450 Hello world! 
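In the arbitration summary above, each core reports an IO/s figure alongside the seconds taken for the run; since the run issues a fixed 100000 I/Os per thread (`-n 100000` in the echoed configuration), the two columns are reciprocals. A minimal check against the printed values:

```python
# secs/100000 ios = 100000 / (IO/s); verify the four cores reported above.
TOTAL_IOS = 100_000  # from the -n 100000 flag in the arbitration config line

def secs_for_run(iops: float, total_ios: int = TOTAL_IOS) -> float:
    """Seconds needed to complete total_ios at a given IOPS rate."""
    return total_ios / iops

for core, iops in [(0, 10078.67), (1, 9141.67), (2, 8659.33), (3, 6652.00)]:
    # Expected: 9.92, 10.94, 11.55, 15.03 secs, matching the log
    print(f"core {core}: {secs_for_run(iops):.2f} secs")
```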
00:12:26.450 [2024-07-12 14:17:18.457102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.710 14:17:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.710 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.710 [2024-07-12 14:17:18.717822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.089 Initializing NVMe Controllers 00:12:28.089 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.089 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.089 Initialization complete. Launching workers. 00:12:28.089 submit (in ns) avg, min, max = 6178.8, 3230.4, 4000601.7 00:12:28.089 complete (in ns) avg, min, max = 21006.9, 1782.6, 4031039.1 00:12:28.089 00:12:28.089 Submit histogram 00:12:28.089 ================ 00:12:28.089 Range in us Cumulative Count 00:12:28.089 3.228 - 3.242: 0.0184% ( 3) 00:12:28.089 3.270 - 3.283: 0.0920% ( 12) 00:12:28.089 3.283 - 3.297: 0.6194% ( 86) 00:12:28.089 3.297 - 3.311: 1.9134% ( 211) 00:12:28.089 3.311 - 3.325: 3.1522% ( 202) 00:12:28.089 3.325 - 3.339: 5.2496% ( 342) 00:12:28.089 3.339 - 3.353: 9.8798% ( 755) 00:12:28.089 3.353 - 3.367: 15.1907% ( 866) 00:12:28.089 3.367 - 3.381: 21.3173% ( 999) 00:12:28.089 3.381 - 3.395: 27.7628% ( 1051) 00:12:28.089 3.395 - 3.409: 33.3558% ( 912) 00:12:28.089 3.409 - 3.423: 38.7035% ( 872) 00:12:28.089 3.423 - 3.437: 44.3702% ( 924) 00:12:28.089 3.437 - 3.450: 49.7363% ( 875) 00:12:28.089 3.450 - 3.464: 54.4156% ( 763) 00:12:28.089 3.464 - 3.478: 58.4754% ( 662) 00:12:28.089 3.478 - 3.492: 64.7369% ( 1021) 00:12:28.089 3.492 - 3.506: 70.2686% ( 902) 00:12:28.089 3.506 - 3.520: 73.8930% ( 591) 00:12:28.089 3.520 - 3.534: 78.0204% ( 673) 
00:12:28.089 3.534 - 3.548: 81.9085% ( 634) 00:12:28.089 3.548 - 3.562: 84.4536% ( 415) 00:12:28.089 3.562 - 3.590: 86.8515% ( 391) 00:12:28.089 3.590 - 3.617: 88.0719% ( 199) 00:12:28.089 3.617 - 3.645: 89.3046% ( 201) 00:12:28.089 3.645 - 3.673: 90.9420% ( 267) 00:12:28.089 3.673 - 3.701: 92.6775% ( 283) 00:12:28.089 3.701 - 3.729: 94.2353% ( 254) 00:12:28.089 3.729 - 3.757: 95.9340% ( 277) 00:12:28.089 3.757 - 3.784: 97.4672% ( 250) 00:12:28.089 3.784 - 3.812: 98.4239% ( 156) 00:12:28.089 3.812 - 3.840: 99.0372% ( 100) 00:12:28.089 3.840 - 3.868: 99.3622% ( 53) 00:12:28.089 3.868 - 3.896: 99.4849% ( 20) 00:12:28.089 3.896 - 3.923: 99.5768% ( 15) 00:12:28.089 3.923 - 3.951: 99.6136% ( 6) 00:12:28.089 3.951 - 3.979: 99.6198% ( 1) 00:12:28.089 4.007 - 4.035: 99.6259% ( 1) 00:12:28.089 4.146 - 4.174: 99.6320% ( 1) 00:12:28.089 4.257 - 4.285: 99.6382% ( 1) 00:12:28.089 5.259 - 5.287: 99.6443% ( 1) 00:12:28.089 5.426 - 5.454: 99.6504% ( 1) 00:12:28.089 5.482 - 5.510: 99.6566% ( 1) 00:12:28.089 5.649 - 5.677: 99.6627% ( 1) 00:12:28.089 5.704 - 5.732: 99.6688% ( 1) 00:12:28.089 5.788 - 5.816: 99.6811% ( 2) 00:12:28.089 5.927 - 5.955: 99.6872% ( 1) 00:12:28.089 6.177 - 6.205: 99.6934% ( 1) 00:12:28.089 6.261 - 6.289: 99.6995% ( 1) 00:12:28.089 6.483 - 6.511: 99.7118% ( 2) 00:12:28.089 6.567 - 6.595: 99.7179% ( 1) 00:12:28.089 6.623 - 6.650: 99.7240% ( 1) 00:12:28.089 6.650 - 6.678: 99.7302% ( 1) 00:12:28.089 6.706 - 6.734: 99.7363% ( 1) 00:12:28.089 6.762 - 6.790: 99.7424% ( 1) 00:12:28.089 6.790 - 6.817: 99.7486% ( 1) 00:12:28.089 6.817 - 6.845: 99.7608% ( 2) 00:12:28.089 6.845 - 6.873: 99.7731% ( 2) 00:12:28.089 6.873 - 6.901: 99.7792% ( 1) 00:12:28.089 6.901 - 6.929: 99.7854% ( 1) 00:12:28.089 6.929 - 6.957: 99.7915% ( 1) 00:12:28.089 6.984 - 7.012: 99.7976% ( 1) 00:12:28.089 7.040 - 7.068: 99.8099% ( 2) 00:12:28.089 7.123 - 7.179: 99.8222% ( 2) 00:12:28.089 7.179 - 7.235: 99.8283% ( 1) 00:12:28.089 7.402 - 7.457: 99.8344% ( 1) 00:12:28.089 7.513 - 7.569: 99.8467% ( 
2) 00:12:28.089 7.624 - 7.680: 99.8589% ( 2) 00:12:28.089 7.736 - 7.791: 99.8651% ( 1) 00:12:28.089 7.847 - 7.903: 99.8712% ( 1) 00:12:28.089 7.958 - 8.014: 99.8773% ( 1) 00:12:28.089 8.070 - 8.125: 99.8835% ( 1) 00:12:28.089 8.292 - 8.348: 99.8896% ( 1) 00:12:28.089 8.403 - 8.459: 99.9019% ( 2) 00:12:28.089 8.459 - 8.515: 99.9080% ( 1) 00:12:28.089 8.570 - 8.626: 99.9141% ( 1) 00:12:28.089 8.682 - 8.737: 99.9203% ( 1) 00:12:28.089 10.518 - 10.574: 99.9264% ( 1) 00:12:28.089 10.741 - 10.797: 99.9325% ( 1) 00:12:28.089 [2024-07-12 14:17:19.739817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.089 3989.148 - 4017.642: 100.0000% ( 11) 00:12:28.089 00:12:28.089 Complete histogram 00:12:28.089 ================== 00:12:28.089 Range in us Cumulative Count 00:12:28.089 1.781 - 1.795: 0.0184% ( 3) 00:12:28.089 1.809 - 1.823: 0.0675% ( 8) 00:12:28.089 1.823 - 1.837: 1.0610% ( 162) 00:12:28.089 1.837 - 1.850: 2.5267% ( 239) 00:12:28.089 1.850 - 1.864: 3.9863% ( 238) 00:12:28.089 1.864 - 1.878: 14.8166% ( 1766) 00:12:28.089 1.878 - 1.892: 63.7127% ( 7973) 00:12:28.089 1.892 - 1.906: 89.2371% ( 4162) 00:12:28.089 1.906 - 1.920: 94.7320% ( 896) 00:12:28.089 1.920 - 1.934: 96.1119% ( 225) 00:12:28.089 1.934 - 1.948: 96.6086% ( 81) 00:12:28.089 1.948 - 1.962: 97.6634% ( 172) 00:12:28.089 1.962 - 1.976: 98.6937% ( 168) 00:12:28.089 1.976 - 1.990: 99.1353% ( 72) 00:12:28.089 1.990 - 2.003: 99.2395% ( 17) 00:12:28.089 2.003 - 2.017: 99.2763% ( 6) 00:12:28.089 2.017 - 2.031: 99.2825% ( 1) 00:12:28.089 2.031 - 2.045: 99.2886% ( 1) 00:12:28.089 2.045 - 2.059: 99.2947% ( 1) 00:12:28.089 2.059 - 2.073: 99.3070% ( 2) 00:12:28.089 2.073 - 2.087: 99.3254% ( 3) 00:12:28.089 2.087 - 2.101: 99.3315% ( 1) 00:12:28.089 2.101 - 2.115: 99.3438% ( 2) 00:12:28.089 2.129 - 2.143: 99.3499% ( 1) 00:12:28.089 2.157 - 2.170: 99.3561% ( 1) 00:12:28.089 4.035 - 4.063: 99.3622% ( 1) 00:12:28.089 4.063 - 4.090: 99.3683% ( 1) 00:12:28.089 4.424 
- 4.452: 99.3745% ( 1) 00:12:28.089 4.563 - 4.591: 99.3806% ( 1) 00:12:28.089 4.591 - 4.619: 99.3867% ( 1) 00:12:28.089 4.675 - 4.703: 99.3929% ( 1) 00:12:28.089 4.870 - 4.897: 99.3990% ( 1) 00:12:28.089 4.953 - 4.981: 99.4051% ( 1) 00:12:28.089 5.315 - 5.343: 99.4113% ( 1) 00:12:28.089 5.510 - 5.537: 99.4174% ( 1) 00:12:28.089 5.537 - 5.565: 99.4235% ( 1) 00:12:28.089 5.677 - 5.704: 99.4297% ( 1) 00:12:28.089 5.983 - 6.010: 99.4358% ( 1) 00:12:28.089 6.038 - 6.066: 99.4481% ( 2) 00:12:28.089 6.066 - 6.094: 99.4542% ( 1) 00:12:28.089 6.177 - 6.205: 99.4603% ( 1) 00:12:28.089 6.205 - 6.233: 99.4665% ( 1) 00:12:28.089 6.344 - 6.372: 99.4726% ( 1) 00:12:28.089 6.567 - 6.595: 99.4787% ( 1) 00:12:28.089 6.929 - 6.957: 99.4849% ( 1) 00:12:28.089 7.123 - 7.179: 99.4910% ( 1) 00:12:28.089 7.513 - 7.569: 99.4971% ( 1) 00:12:28.089 7.680 - 7.736: 99.5033% ( 1) 00:12:28.089 8.348 - 8.403: 99.5094% ( 1) 00:12:28.089 8.459 - 8.515: 99.5155% ( 1) 00:12:28.089 12.410 - 12.466: 99.5216% ( 1) 00:12:28.089 3989.148 - 4017.642: 99.9939% ( 77) 00:12:28.089 4017.642 - 4046.136: 100.0000% ( 1) 00:12:28.089 00:12:28.089 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:28.089 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:28.089 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.090 [ 00:12:28.090 { 00:12:28.090 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.090 "subtype": "Discovery", 00:12:28.090 "listen_addresses": [], 00:12:28.090 "allow_any_host": true, 
00:12:28.090 "hosts": [] 00:12:28.090 }, 00:12:28.090 { 00:12:28.090 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.090 "subtype": "NVMe", 00:12:28.090 "listen_addresses": [ 00:12:28.090 { 00:12:28.090 "trtype": "VFIOUSER", 00:12:28.090 "adrfam": "IPv4", 00:12:28.090 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.090 "trsvcid": "0" 00:12:28.090 } 00:12:28.090 ], 00:12:28.090 "allow_any_host": true, 00:12:28.090 "hosts": [], 00:12:28.090 "serial_number": "SPDK1", 00:12:28.090 "model_number": "SPDK bdev Controller", 00:12:28.090 "max_namespaces": 32, 00:12:28.090 "min_cntlid": 1, 00:12:28.090 "max_cntlid": 65519, 00:12:28.090 "namespaces": [ 00:12:28.090 { 00:12:28.090 "nsid": 1, 00:12:28.090 "bdev_name": "Malloc1", 00:12:28.090 "name": "Malloc1", 00:12:28.090 "nguid": "E4620C97A459475FA6F3A0F37B46E4FD", 00:12:28.090 "uuid": "e4620c97-a459-475f-a6f3-a0f37b46e4fd" 00:12:28.090 } 00:12:28.090 ] 00:12:28.090 }, 00:12:28.090 { 00:12:28.090 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.090 "subtype": "NVMe", 00:12:28.090 "listen_addresses": [ 00:12:28.090 { 00:12:28.090 "trtype": "VFIOUSER", 00:12:28.090 "adrfam": "IPv4", 00:12:28.090 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.090 "trsvcid": "0" 00:12:28.090 } 00:12:28.090 ], 00:12:28.090 "allow_any_host": true, 00:12:28.090 "hosts": [], 00:12:28.090 "serial_number": "SPDK2", 00:12:28.090 "model_number": "SPDK bdev Controller", 00:12:28.090 "max_namespaces": 32, 00:12:28.090 "min_cntlid": 1, 00:12:28.090 "max_cntlid": 65519, 00:12:28.090 "namespaces": [ 00:12:28.090 { 00:12:28.090 "nsid": 1, 00:12:28.090 "bdev_name": "Malloc2", 00:12:28.090 "name": "Malloc2", 00:12:28.090 "nguid": "588B11A68AC949458FBBEDE15C6DB314", 00:12:28.090 "uuid": "588b11a6-8ac9-4945-8fbb-ede15c6db314" 00:12:28.090 } 00:12:28.090 ] 00:12:28.090 } 00:12:28.090 ] 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:28.090 14:17:19 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2471840 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:28.090 14:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:28.090 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.090 [2024-07-12 14:17:20.096955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.349 Malloc3 00:12:28.349 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:28.349 [2024-07-12 14:17:20.338813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.608 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.608 Asynchronous Event Request test 
00:12:28.608 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.608 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.608 Registering asynchronous event callbacks... 00:12:28.608 Starting namespace attribute notice tests for all controllers... 00:12:28.608 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:28.608 aer_cb - Changed Namespace 00:12:28.608 Cleaning up... 00:12:28.608 [ 00:12:28.608 { 00:12:28.608 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.608 "subtype": "Discovery", 00:12:28.608 "listen_addresses": [], 00:12:28.608 "allow_any_host": true, 00:12:28.608 "hosts": [] 00:12:28.608 }, 00:12:28.608 { 00:12:28.608 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.608 "subtype": "NVMe", 00:12:28.608 "listen_addresses": [ 00:12:28.608 { 00:12:28.608 "trtype": "VFIOUSER", 00:12:28.608 "adrfam": "IPv4", 00:12:28.608 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.608 "trsvcid": "0" 00:12:28.608 } 00:12:28.608 ], 00:12:28.608 "allow_any_host": true, 00:12:28.608 "hosts": [], 00:12:28.608 "serial_number": "SPDK1", 00:12:28.608 "model_number": "SPDK bdev Controller", 00:12:28.608 "max_namespaces": 32, 00:12:28.608 "min_cntlid": 1, 00:12:28.608 "max_cntlid": 65519, 00:12:28.608 "namespaces": [ 00:12:28.608 { 00:12:28.608 "nsid": 1, 00:12:28.608 "bdev_name": "Malloc1", 00:12:28.608 "name": "Malloc1", 00:12:28.608 "nguid": "E4620C97A459475FA6F3A0F37B46E4FD", 00:12:28.608 "uuid": "e4620c97-a459-475f-a6f3-a0f37b46e4fd" 00:12:28.608 }, 00:12:28.608 { 00:12:28.608 "nsid": 2, 00:12:28.608 "bdev_name": "Malloc3", 00:12:28.608 "name": "Malloc3", 00:12:28.608 "nguid": "2CF79F137D4B483B8CB91D4B9D7E1EA8", 00:12:28.608 "uuid": "2cf79f13-7d4b-483b-8cb9-1d4b9d7e1ea8" 00:12:28.608 } 00:12:28.608 ] 00:12:28.608 }, 00:12:28.608 { 00:12:28.608 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.608 "subtype": "NVMe", 00:12:28.608 "listen_addresses": [ 00:12:28.608 { 00:12:28.608 "trtype": 
"VFIOUSER", 00:12:28.608 "adrfam": "IPv4", 00:12:28.608 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.608 "trsvcid": "0" 00:12:28.608 } 00:12:28.608 ], 00:12:28.608 "allow_any_host": true, 00:12:28.608 "hosts": [], 00:12:28.608 "serial_number": "SPDK2", 00:12:28.608 "model_number": "SPDK bdev Controller", 00:12:28.608 "max_namespaces": 32, 00:12:28.608 "min_cntlid": 1, 00:12:28.608 "max_cntlid": 65519, 00:12:28.608 "namespaces": [ 00:12:28.608 { 00:12:28.608 "nsid": 1, 00:12:28.608 "bdev_name": "Malloc2", 00:12:28.608 "name": "Malloc2", 00:12:28.608 "nguid": "588B11A68AC949458FBBEDE15C6DB314", 00:12:28.608 "uuid": "588b11a6-8ac9-4945-8fbb-ede15c6db314" 00:12:28.608 } 00:12:28.608 ] 00:12:28.608 } 00:12:28.608 ] 00:12:28.609 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2471840 00:12:28.609 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:28.609 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:28.609 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:28.609 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:28.609 [2024-07-12 14:17:20.571896] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
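In the `nvmf_get_subsystems` JSON above, each namespace carries both an `"nguid"` (32 hex digits) and a `"uuid"`; the latter is the same 16 bytes rendered in canonical dashed lowercase form. A small illustration with the values from the dump (helper name is ours):

```python
import uuid

def nguid_to_uuid(nguid_hex: str) -> str:
    """Render a 16-byte NGUID hex string as a canonical lowercase UUID."""
    return str(uuid.UUID(hex=nguid_hex))

# Malloc1's NGUID from the subsystem listing above
print(nguid_to_uuid("E4620C97A459475FA6F3A0F37B46E4FD"))
# e4620c97-a459-475f-a6f3-a0f37b46e4fd
```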
00:12:28.609 [2024-07-12 14:17:20.571926] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471854 ] 00:12:28.609 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.609 [2024-07-12 14:17:20.599768] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:28.609 [2024-07-12 14:17:20.602565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:28.609 [2024-07-12 14:17:20.602584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd2cd64c000 00:12:28.609 [2024-07-12 14:17:20.603571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.604574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.605581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.606593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.607600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.608613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.609623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.610629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:28.609 [2024-07-12 14:17:20.611646] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:28.609 [2024-07-12 14:17:20.611655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd2cd641000 00:12:28.609 [2024-07-12 14:17:20.612601] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.869 [2024-07-12 14:17:20.625109] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:28.870 [2024-07-12 14:17:20.625130] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:28.870 [2024-07-12 14:17:20.628382] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:28.870 [2024-07-12 14:17:20.628418] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:28.870 [2024-07-12 14:17:20.628483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:28.870 [2024-07-12 14:17:20.628499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:28.870 [2024-07-12 14:17:20.628503] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:28.870 [2024-07-12 14:17:20.629212] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:28.870 [2024-07-12 14:17:20.629221] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:28.870 [2024-07-12 14:17:20.629227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:28.870 [2024-07-12 14:17:20.630215] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:28.870 [2024-07-12 14:17:20.630224] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:28.870 [2024-07-12 14:17:20.630230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.631217] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:28.870 [2024-07-12 14:17:20.631226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.632227] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:28.870 [2024-07-12 14:17:20.632235] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:28.870 [2024-07-12 14:17:20.632239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.632245] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.632350] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:28.870 [2024-07-12 14:17:20.632354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.632358] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:28.870 [2024-07-12 14:17:20.633233] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:28.870 [2024-07-12 14:17:20.634234] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:28.870 [2024-07-12 14:17:20.635243] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:28.870 [2024-07-12 14:17:20.636254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:28.870 [2024-07-12 14:17:20.636289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:28.870 [2024-07-12 14:17:20.637265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:28.870 [2024-07-12 14:17:20.637274] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:28.870 [2024-07-12 14:17:20.637278] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.637295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:28.870 [2024-07-12 14:17:20.637304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.637315] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.870 [2024-07-12 14:17:20.637319] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.870 [2024-07-12 14:17:20.637330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.643384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.643395] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:28.870 [2024-07-12 14:17:20.643402] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:28.870 [2024-07-12 14:17:20.643406] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:28.870 [2024-07-12 14:17:20.643411] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:28.870 [2024-07-12 14:17:20.643415] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:28.870 [2024-07-12 
14:17:20.643419] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:28.870 [2024-07-12 14:17:20.643423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.643430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.643439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.651383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.651397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.870 [2024-07-12 14:17:20.651404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.870 [2024-07-12 14:17:20.651411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.870 [2024-07-12 14:17:20.651419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.870 [2024-07-12 14:17:20.651425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.651432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:28.870 [2024-07-12 
14:17:20.651440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.659382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.659389] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:28.870 [2024-07-12 14:17:20.659393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.659399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.659404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.659412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.667382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.667434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.667441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.667448] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:28.870 [2024-07-12 
14:17:20.667452] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:28.870 [2024-07-12 14:17:20.667458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.675383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.675393] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:28.870 [2024-07-12 14:17:20.675400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.675407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.675413] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.870 [2024-07-12 14:17:20.675417] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.870 [2024-07-12 14:17:20.675422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.683382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.683396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.683403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.683411] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.870 [2024-07-12 14:17:20.683415] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.870 [2024-07-12 14:17:20.683421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.870 [2024-07-12 14:17:20.691383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:28.870 [2024-07-12 14:17:20.691394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.691400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.691409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.691414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:28.870 [2024-07-12 14:17:20.691418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:28.871 [2024-07-12 14:17:20.691423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:28.871 [2024-07-12 14:17:20.691427] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:12:28.871 [2024-07-12 14:17:20.691431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:28.871 [2024-07-12 14:17:20.691435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:28.871 [2024-07-12 14:17:20.691450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.699384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.699396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.707384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.707396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.715384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.715395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.723383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.723398] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:28.871 [2024-07-12 14:17:20.723403] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:28.871 [2024-07-12 
14:17:20.723406] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:28.871 [2024-07-12 14:17:20.723409] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:28.871 [2024-07-12 14:17:20.723415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:28.871 [2024-07-12 14:17:20.723423] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:28.871 [2024-07-12 14:17:20.723427] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:28.871 [2024-07-12 14:17:20.723432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.723438] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:28.871 [2024-07-12 14:17:20.723442] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.871 [2024-07-12 14:17:20.723447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.723454] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:28.871 [2024-07-12 14:17:20.723457] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:28.871 [2024-07-12 14:17:20.723463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:28.871 [2024-07-12 14:17:20.731384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.731398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.731407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:28.871 [2024-07-12 14:17:20.731414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:28.871 ===================================================== 00:12:28.871 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:28.871 ===================================================== 00:12:28.871 Controller Capabilities/Features 00:12:28.871 ================================ 00:12:28.871 Vendor ID: 4e58 00:12:28.871 Subsystem Vendor ID: 4e58 00:12:28.871 Serial Number: SPDK2 00:12:28.871 Model Number: SPDK bdev Controller 00:12:28.871 Firmware Version: 24.09 00:12:28.871 Recommended Arb Burst: 6 00:12:28.871 IEEE OUI Identifier: 8d 6b 50 00:12:28.871 Multi-path I/O 00:12:28.871 May have multiple subsystem ports: Yes 00:12:28.871 May have multiple controllers: Yes 00:12:28.871 Associated with SR-IOV VF: No 00:12:28.871 Max Data Transfer Size: 131072 00:12:28.871 Max Number of Namespaces: 32 00:12:28.871 Max Number of I/O Queues: 127 00:12:28.871 NVMe Specification Version (VS): 1.3 00:12:28.871 NVMe Specification Version (Identify): 1.3 00:12:28.871 Maximum Queue Entries: 256 00:12:28.871 Contiguous Queues Required: Yes 00:12:28.871 Arbitration Mechanisms Supported 00:12:28.871 Weighted Round Robin: Not Supported 00:12:28.871 Vendor Specific: Not Supported 00:12:28.871 Reset Timeout: 15000 ms 00:12:28.871 Doorbell Stride: 4 bytes 00:12:28.871 NVM Subsystem Reset: Not Supported 00:12:28.871 Command Sets Supported 00:12:28.871 NVM Command Set: Supported 00:12:28.871 Boot Partition: Not Supported 
00:12:28.871 Memory Page Size Minimum: 4096 bytes 00:12:28.871 Memory Page Size Maximum: 4096 bytes 00:12:28.871 Persistent Memory Region: Not Supported 00:12:28.871 Optional Asynchronous Events Supported 00:12:28.871 Namespace Attribute Notices: Supported 00:12:28.871 Firmware Activation Notices: Not Supported 00:12:28.871 ANA Change Notices: Not Supported 00:12:28.871 PLE Aggregate Log Change Notices: Not Supported 00:12:28.871 LBA Status Info Alert Notices: Not Supported 00:12:28.871 EGE Aggregate Log Change Notices: Not Supported 00:12:28.871 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.871 Zone Descriptor Change Notices: Not Supported 00:12:28.871 Discovery Log Change Notices: Not Supported 00:12:28.871 Controller Attributes 00:12:28.871 128-bit Host Identifier: Supported 00:12:28.871 Non-Operational Permissive Mode: Not Supported 00:12:28.871 NVM Sets: Not Supported 00:12:28.871 Read Recovery Levels: Not Supported 00:12:28.871 Endurance Groups: Not Supported 00:12:28.871 Predictable Latency Mode: Not Supported 00:12:28.871 Traffic Based Keep ALive: Not Supported 00:12:28.871 Namespace Granularity: Not Supported 00:12:28.871 SQ Associations: Not Supported 00:12:28.871 UUID List: Not Supported 00:12:28.871 Multi-Domain Subsystem: Not Supported 00:12:28.871 Fixed Capacity Management: Not Supported 00:12:28.871 Variable Capacity Management: Not Supported 00:12:28.871 Delete Endurance Group: Not Supported 00:12:28.871 Delete NVM Set: Not Supported 00:12:28.871 Extended LBA Formats Supported: Not Supported 00:12:28.871 Flexible Data Placement Supported: Not Supported 00:12:28.871 00:12:28.871 Controller Memory Buffer Support 00:12:28.871 ================================ 00:12:28.871 Supported: No 00:12:28.871 00:12:28.871 Persistent Memory Region Support 00:12:28.871 ================================ 00:12:28.871 Supported: No 00:12:28.871 00:12:28.871 Admin Command Set Attributes 00:12:28.871 ============================ 00:12:28.871 Security 
Send/Receive: Not Supported 00:12:28.871 Format NVM: Not Supported 00:12:28.871 Firmware Activate/Download: Not Supported 00:12:28.871 Namespace Management: Not Supported 00:12:28.871 Device Self-Test: Not Supported 00:12:28.871 Directives: Not Supported 00:12:28.871 NVMe-MI: Not Supported 00:12:28.871 Virtualization Management: Not Supported 00:12:28.871 Doorbell Buffer Config: Not Supported 00:12:28.871 Get LBA Status Capability: Not Supported 00:12:28.871 Command & Feature Lockdown Capability: Not Supported 00:12:28.871 Abort Command Limit: 4 00:12:28.871 Async Event Request Limit: 4 00:12:28.871 Number of Firmware Slots: N/A 00:12:28.871 Firmware Slot 1 Read-Only: N/A 00:12:28.871 Firmware Activation Without Reset: N/A 00:12:28.871 Multiple Update Detection Support: N/A 00:12:28.871 Firmware Update Granularity: No Information Provided 00:12:28.871 Per-Namespace SMART Log: No 00:12:28.871 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.871 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:28.871 Command Effects Log Page: Supported 00:12:28.871 Get Log Page Extended Data: Supported 00:12:28.871 Telemetry Log Pages: Not Supported 00:12:28.871 Persistent Event Log Pages: Not Supported 00:12:28.871 Supported Log Pages Log Page: May Support 00:12:28.871 Commands Supported & Effects Log Page: Not Supported 00:12:28.871 Feature Identifiers & Effects Log Page:May Support 00:12:28.871 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.871 Data Area 4 for Telemetry Log: Not Supported 00:12:28.872 Error Log Page Entries Supported: 128 00:12:28.872 Keep Alive: Supported 00:12:28.872 Keep Alive Granularity: 10000 ms 00:12:28.872 00:12:28.872 NVM Command Set Attributes 00:12:28.872 ========================== 00:12:28.872 Submission Queue Entry Size 00:12:28.872 Max: 64 00:12:28.872 Min: 64 00:12:28.872 Completion Queue Entry Size 00:12:28.872 Max: 16 00:12:28.872 Min: 16 00:12:28.872 Number of Namespaces: 32 00:12:28.872 Compare Command: Supported 
00:12:28.872 Write Uncorrectable Command: Not Supported 00:12:28.872 Dataset Management Command: Supported 00:12:28.872 Write Zeroes Command: Supported 00:12:28.872 Set Features Save Field: Not Supported 00:12:28.872 Reservations: Not Supported 00:12:28.872 Timestamp: Not Supported 00:12:28.872 Copy: Supported 00:12:28.872 Volatile Write Cache: Present 00:12:28.872 Atomic Write Unit (Normal): 1 00:12:28.872 Atomic Write Unit (PFail): 1 00:12:28.872 Atomic Compare & Write Unit: 1 00:12:28.872 Fused Compare & Write: Supported 00:12:28.872 Scatter-Gather List 00:12:28.872 SGL Command Set: Supported (Dword aligned) 00:12:28.872 SGL Keyed: Not Supported 00:12:28.872 SGL Bit Bucket Descriptor: Not Supported 00:12:28.872 SGL Metadata Pointer: Not Supported 00:12:28.872 Oversized SGL: Not Supported 00:12:28.872 SGL Metadata Address: Not Supported 00:12:28.872 SGL Offset: Not Supported 00:12:28.872 Transport SGL Data Block: Not Supported 00:12:28.872 Replay Protected Memory Block: Not Supported 00:12:28.872 00:12:28.872 Firmware Slot Information 00:12:28.872 ========================= 00:12:28.872 Active slot: 1 00:12:28.872 Slot 1 Firmware Revision: 24.09 00:12:28.872 00:12:28.872 00:12:28.872 Commands Supported and Effects 00:12:28.872 ============================== 00:12:28.872 Admin Commands 00:12:28.872 -------------- 00:12:28.872 Get Log Page (02h): Supported 00:12:28.872 Identify (06h): Supported 00:12:28.872 Abort (08h): Supported 00:12:28.872 Set Features (09h): Supported 00:12:28.872 Get Features (0Ah): Supported 00:12:28.872 Asynchronous Event Request (0Ch): Supported 00:12:28.872 Keep Alive (18h): Supported 00:12:28.872 I/O Commands 00:12:28.872 ------------ 00:12:28.872 Flush (00h): Supported LBA-Change 00:12:28.872 Write (01h): Supported LBA-Change 00:12:28.872 Read (02h): Supported 00:12:28.872 Compare (05h): Supported 00:12:28.872 Write Zeroes (08h): Supported LBA-Change 00:12:28.872 Dataset Management (09h): Supported LBA-Change 00:12:28.872 Copy (19h): 
Supported LBA-Change 00:12:28.872 00:12:28.872 Error Log 00:12:28.872 ========= 00:12:28.872 00:12:28.872 Arbitration 00:12:28.872 =========== 00:12:28.872 Arbitration Burst: 1 00:12:28.872 00:12:28.872 Power Management 00:12:28.872 ================ 00:12:28.872 Number of Power States: 1 00:12:28.872 Current Power State: Power State #0 00:12:28.872 Power State #0: 00:12:28.872 Max Power: 0.00 W 00:12:28.872 Non-Operational State: Operational 00:12:28.872 Entry Latency: Not Reported 00:12:28.872 Exit Latency: Not Reported 00:12:28.872 Relative Read Throughput: 0 00:12:28.872 Relative Read Latency: 0 00:12:28.872 Relative Write Throughput: 0 00:12:28.872 Relative Write Latency: 0 00:12:28.872 Idle Power: Not Reported 00:12:28.872 Active Power: Not Reported 00:12:28.872 Non-Operational Permissive Mode: Not Supported 00:12:28.872 00:12:28.872 Health Information 00:12:28.872 ================== 00:12:28.872 Critical Warnings: 00:12:28.872 Available Spare Space: OK 00:12:28.872 Temperature: OK 00:12:28.872 Device Reliability: OK 00:12:28.872 Read Only: No 00:12:28.872 Volatile Memory Backup: OK 00:12:28.872 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:28.872 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:28.872 Available Spare: 0% 00:12:28.872 Available Sp[2024-07-12 14:17:20.731500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:28.872 [2024-07-12 14:17:20.739385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:28.872 [2024-07-12 14:17:20.739426] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:28.872 [2024-07-12 14:17:20.739435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.872 [2024-07-12 14:17:20.739441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.872 [2024-07-12 14:17:20.739446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.872 [2024-07-12 14:17:20.739452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.872 [2024-07-12 14:17:20.739508] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:28.872 [2024-07-12 14:17:20.739518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:28.872 [2024-07-12 14:17:20.740504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:28.872 [2024-07-12 14:17:20.740546] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:28.872 [2024-07-12 14:17:20.740552] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:28.872 [2024-07-12 14:17:20.741517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:28.872 [2024-07-12 14:17:20.741528] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:28.872 [2024-07-12 14:17:20.741575] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:28.872 [2024-07-12 14:17:20.744384] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.872 are Threshold: 0% 00:12:28.872 
Life Percentage Used: 0% 00:12:28.872 Data Units Read: 0 00:12:28.872 Data Units Written: 0 00:12:28.872 Host Read Commands: 0 00:12:28.872 Host Write Commands: 0 00:12:28.872 Controller Busy Time: 0 minutes 00:12:28.872 Power Cycles: 0 00:12:28.872 Power On Hours: 0 hours 00:12:28.872 Unsafe Shutdowns: 0 00:12:28.872 Unrecoverable Media Errors: 0 00:12:28.872 Lifetime Error Log Entries: 0 00:12:28.872 Warning Temperature Time: 0 minutes 00:12:28.872 Critical Temperature Time: 0 minutes 00:12:28.872 00:12:28.872 Number of Queues 00:12:28.872 ================ 00:12:28.872 Number of I/O Submission Queues: 127 00:12:28.872 Number of I/O Completion Queues: 127 00:12:28.872 00:12:28.872 Active Namespaces 00:12:28.872 ================= 00:12:28.872 Namespace ID:1 00:12:28.872 Error Recovery Timeout: Unlimited 00:12:28.872 Command Set Identifier: NVM (00h) 00:12:28.872 Deallocate: Supported 00:12:28.872 Deallocated/Unwritten Error: Not Supported 00:12:28.872 Deallocated Read Value: Unknown 00:12:28.872 Deallocate in Write Zeroes: Not Supported 00:12:28.873 Deallocated Guard Field: 0xFFFF 00:12:28.873 Flush: Supported 00:12:28.873 Reservation: Supported 00:12:28.873 Namespace Sharing Capabilities: Multiple Controllers 00:12:28.873 Size (in LBAs): 131072 (0GiB) 00:12:28.873 Capacity (in LBAs): 131072 (0GiB) 00:12:28.873 Utilization (in LBAs): 131072 (0GiB) 00:12:28.873 NGUID: 588B11A68AC949458FBBEDE15C6DB314 00:12:28.873 UUID: 588b11a6-8ac9-4945-8fbb-ede15c6db314 00:12:28.873 Thin Provisioning: Not Supported 00:12:28.873 Per-NS Atomic Units: Yes 00:12:28.873 Atomic Boundary Size (Normal): 0 00:12:28.873 Atomic Boundary Size (PFail): 0 00:12:28.873 Atomic Boundary Offset: 0 00:12:28.873 Maximum Single Source Range Length: 65535 00:12:28.873 Maximum Copy Length: 65535 00:12:28.873 Maximum Source Range Count: 1 00:12:28.873 NGUID/EUI64 Never Reused: No 00:12:28.873 Namespace Write Protected: No 00:12:28.873 Number of LBA Formats: 1 00:12:28.873 Current LBA Format: LBA Format 
#00 00:12:28.873 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.873 00:12:28.873 14:17:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:28.873 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.132 [2024-07-12 14:17:20.955760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.406 Initializing NVMe Controllers 00:12:34.406 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:34.406 Initialization complete. Launching workers. 00:12:34.406 ======================================================== 00:12:34.406 Latency(us) 00:12:34.406 Device Information : IOPS MiB/s Average min max 00:12:34.406 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39917.96 155.93 3206.18 977.58 6773.06 00:12:34.406 ======================================================== 00:12:34.406 Total : 39917.96 155.93 3206.18 977.58 6773.06 00:12:34.406 00:12:34.406 [2024-07-12 14:17:26.061620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.406 14:17:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:34.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.406 [2024-07-12 14:17:26.276267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.676 
Initializing NVMe Controllers 00:12:39.676 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:39.676 Initialization complete. Launching workers. 00:12:39.676 ======================================================== 00:12:39.677 Latency(us) 00:12:39.677 Device Information : IOPS MiB/s Average min max 00:12:39.677 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.19 155.99 3205.10 992.33 7151.64 00:12:39.677 ======================================================== 00:12:39.677 Total : 39934.19 155.99 3205.10 992.33 7151.64 00:12:39.677 00:12:39.677 [2024-07-12 14:17:31.296646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.677 14:17:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:39.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.677 [2024-07-12 14:17:31.494941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:44.983 [2024-07-12 14:17:36.625479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:44.983 Initializing NVMe Controllers 00:12:44.983 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:44.983 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:44.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:44.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:12:44.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:44.983 Initialization complete. Launching workers. 00:12:44.983 Starting thread on core 2 00:12:44.983 Starting thread on core 3 00:12:44.983 Starting thread on core 1 00:12:44.983 14:17:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:44.983 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.983 [2024-07-12 14:17:36.909805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.271 [2024-07-12 14:17:39.966529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.271 Initializing NVMe Controllers 00:12:48.271 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.271 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.271 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:48.271 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:48.271 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:48.271 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:48.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:48.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:48.271 Initialization complete. Launching workers. 
00:12:48.271 Starting thread on core 1 with urgent priority queue 00:12:48.271 Starting thread on core 2 with urgent priority queue 00:12:48.271 Starting thread on core 3 with urgent priority queue 00:12:48.271 Starting thread on core 0 with urgent priority queue 00:12:48.271 SPDK bdev Controller (SPDK2 ) core 0: 5940.00 IO/s 16.84 secs/100000 ios 00:12:48.271 SPDK bdev Controller (SPDK2 ) core 1: 8285.00 IO/s 12.07 secs/100000 ios 00:12:48.271 SPDK bdev Controller (SPDK2 ) core 2: 7191.33 IO/s 13.91 secs/100000 ios 00:12:48.271 SPDK bdev Controller (SPDK2 ) core 3: 10363.00 IO/s 9.65 secs/100000 ios 00:12:48.271 ======================================================== 00:12:48.271 00:12:48.271 14:17:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:48.271 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.271 [2024-07-12 14:17:40.235810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.271 Initializing NVMe Controllers 00:12:48.271 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.271 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.271 Namespace ID: 1 size: 0GB 00:12:48.271 Initialization complete. 00:12:48.271 INFO: using host memory buffer for IO 00:12:48.271 Hello world! 
00:12:48.271 [2024-07-12 14:17:40.245875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.530 14:17:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:48.530 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.530 [2024-07-12 14:17:40.514140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.909 Initializing NVMe Controllers 00:12:49.909 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.909 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.909 Initialization complete. Launching workers. 00:12:49.909 submit (in ns) avg, min, max = 8279.0, 3260.0, 5992561.7 00:12:49.909 complete (in ns) avg, min, max = 18925.7, 1841.7, 5991713.9 00:12:49.909 00:12:49.909 Submit histogram 00:12:49.909 ================ 00:12:49.909 Range in us Cumulative Count 00:12:49.909 3.256 - 3.270: 0.0433% ( 7) 00:12:49.909 3.270 - 3.283: 0.3151% ( 44) 00:12:49.909 3.283 - 3.297: 1.5016% ( 192) 00:12:49.909 3.297 - 3.311: 4.2452% ( 444) 00:12:49.909 3.311 - 3.325: 7.8910% ( 590) 00:12:49.909 3.325 - 3.339: 12.3216% ( 717) 00:12:49.909 3.339 - 3.353: 17.8397% ( 893) 00:12:49.909 3.353 - 3.367: 23.2652% ( 878) 00:12:49.909 3.367 - 3.381: 28.4558% ( 840) 00:12:49.909 3.381 - 3.395: 34.7587% ( 1020) 00:12:49.909 3.395 - 3.409: 40.0791% ( 861) 00:12:49.909 3.409 - 3.423: 43.8299% ( 607) 00:12:49.909 3.423 - 3.437: 48.0628% ( 685) 00:12:49.909 3.437 - 3.450: 53.9393% ( 951) 00:12:49.909 3.450 - 3.464: 59.9951% ( 980) 00:12:49.909 3.464 - 3.478: 63.9375% ( 638) 00:12:49.909 3.478 - 3.492: 69.2393% ( 858) 00:12:49.909 3.492 - 3.506: 75.0232% ( 936) 00:12:49.909 3.506 - 3.520: 78.9409% ( 634) 00:12:49.909 3.520 - 3.534: 82.2097% ( 529) 
00:12:49.909 3.534 - 3.548: 84.7309% ( 408) 00:12:49.909 3.548 - 3.562: 86.3004% ( 254) 00:12:49.909 3.562 - 3.590: 88.1233% ( 295) 00:12:49.909 3.590 - 3.617: 89.3407% ( 197) 00:12:49.909 3.617 - 3.645: 90.6878% ( 218) 00:12:49.909 3.645 - 3.673: 92.4489% ( 285) 00:12:49.909 3.673 - 3.701: 93.9690% ( 246) 00:12:49.909 3.701 - 3.729: 95.3531% ( 224) 00:12:49.909 3.729 - 3.757: 96.8424% ( 241) 00:12:49.909 3.757 - 3.784: 97.9979% ( 187) 00:12:49.909 3.784 - 3.812: 98.6900% ( 112) 00:12:49.909 3.812 - 3.840: 99.1411% ( 73) 00:12:49.909 3.840 - 3.868: 99.4377% ( 48) 00:12:49.909 3.868 - 3.896: 99.5427% ( 17) 00:12:49.909 3.896 - 3.923: 99.5674% ( 4) 00:12:49.909 3.923 - 3.951: 99.5860% ( 3) 00:12:49.909 3.951 - 3.979: 99.5922% ( 1) 00:12:49.909 5.259 - 5.287: 99.5983% ( 1) 00:12:49.909 5.370 - 5.398: 99.6107% ( 2) 00:12:49.909 5.510 - 5.537: 99.6169% ( 1) 00:12:49.909 5.593 - 5.621: 99.6292% ( 2) 00:12:49.909 5.621 - 5.649: 99.6354% ( 1) 00:12:49.909 5.816 - 5.843: 99.6478% ( 2) 00:12:49.909 5.955 - 5.983: 99.6540% ( 1) 00:12:49.909 6.010 - 6.038: 99.6601% ( 1) 00:12:49.909 6.038 - 6.066: 99.6663% ( 1) 00:12:49.909 6.233 - 6.261: 99.6725% ( 1) 00:12:49.909 6.261 - 6.289: 99.6787% ( 1) 00:12:49.909 6.289 - 6.317: 99.6849% ( 1) 00:12:49.909 6.317 - 6.344: 99.6910% ( 1) 00:12:49.909 6.400 - 6.428: 99.6972% ( 1) 00:12:49.909 6.428 - 6.456: 99.7096% ( 2) 00:12:49.909 6.456 - 6.483: 99.7281% ( 3) 00:12:49.909 6.483 - 6.511: 99.7343% ( 1) 00:12:49.909 6.511 - 6.539: 99.7466% ( 2) 00:12:49.909 6.539 - 6.567: 99.7528% ( 1) 00:12:49.909 6.623 - 6.650: 99.7590% ( 1) 00:12:49.910 6.650 - 6.678: 99.7652% ( 1) 00:12:49.910 6.734 - 6.762: 99.7714% ( 1) 00:12:49.910 6.762 - 6.790: 99.7775% ( 1) 00:12:49.910 6.790 - 6.817: 99.7837% ( 1) 00:12:49.910 6.845 - 6.873: 99.7899% ( 1) 00:12:49.910 7.040 - 7.068: 99.7961% ( 1) 00:12:49.910 7.068 - 7.096: 99.8023% ( 1) 00:12:49.910 7.235 - 7.290: 99.8084% ( 1) 00:12:49.910 7.290 - 7.346: 99.8146% ( 1) 00:12:49.910 7.346 - 7.402: 99.8270% ( 2) 
00:12:49.910 7.402 - 7.457: 99.8332% ( 1) 00:12:49.910 7.457 - 7.513: 99.8393% ( 1) 00:12:49.910 7.680 - 7.736: 99.8517% ( 2) 00:12:49.910 8.014 - 8.070: 99.8579% ( 1) 00:12:49.910 8.626 - 8.682: 99.8641% ( 1) 00:12:49.910 8.849 - 8.904: 99.8702% ( 1) 00:12:49.910 10.741 - 10.797: 99.8764% ( 1) 00:12:49.910 15.026 - 15.137: 99.8826% ( 1) 00:12:49.910 3989.148 - 4017.642: 99.9938% ( 18) 00:12:49.910 5983.722 - 6012.216: 100.0000% ( 1) 00:12:49.910 00:12:49.910 Complete histogram 00:12:49.910 ================== 00:12:49.910 Range in us Cumulative Count 00:12:49.910 1.837 - [2024-07-12 14:17:41.612433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.910 1.850: 0.3893% ( 63) 00:12:49.910 1.850 - 1.864: 22.4495% ( 3570) 00:12:49.910 1.864 - 1.878: 76.4753% ( 8743) 00:12:49.910 1.878 - 1.892: 90.5333% ( 2275) 00:12:49.910 1.892 - 1.906: 93.8330% ( 534) 00:12:49.910 1.906 - 1.920: 94.8897% ( 171) 00:12:49.910 1.920 - 1.934: 95.9587% ( 173) 00:12:49.910 1.934 - 1.948: 97.5653% ( 260) 00:12:49.910 1.948 - 1.962: 98.6529% ( 176) 00:12:49.910 1.962 - 1.976: 99.0113% ( 58) 00:12:49.910 1.976 - 1.990: 99.1596% ( 24) 00:12:49.910 1.990 - 2.003: 99.2152% ( 9) 00:12:49.910 2.003 - 2.017: 99.2338% ( 3) 00:12:49.910 2.017 - 2.031: 99.2647% ( 5) 00:12:49.910 2.031 - 2.045: 99.2956% ( 5) 00:12:49.910 2.045 - 2.059: 99.3141% ( 3) 00:12:49.910 2.059 - 2.073: 99.3265% ( 2) 00:12:49.910 2.073 - 2.087: 99.3388% ( 2) 00:12:49.910 2.087 - 2.101: 99.3450% ( 1) 00:12:49.910 2.101 - 2.115: 99.3512% ( 1) 00:12:49.910 2.240 - 2.254: 99.3574% ( 1) 00:12:49.910 2.310 - 2.323: 99.3635% ( 1) 00:12:49.910 2.435 - 2.449: 99.3697% ( 1) 00:12:49.910 3.673 - 3.701: 99.3821% ( 2) 00:12:49.910 3.729 - 3.757: 99.3882% ( 1) 00:12:49.910 3.840 - 3.868: 99.3944% ( 1) 00:12:49.910 4.230 - 4.257: 99.4006% ( 1) 00:12:49.910 4.313 - 4.341: 99.4130% ( 2) 00:12:49.910 4.508 - 4.536: 99.4191% ( 1) 00:12:49.910 4.591 - 4.619: 99.4253% ( 1) 00:12:49.910 
4.619 - 4.647: 99.4315% ( 1) 00:12:49.910 4.647 - 4.675: 99.4377% ( 1) 00:12:49.910 4.786 - 4.814: 99.4439% ( 1) 00:12:49.910 4.842 - 4.870: 99.4500% ( 1) 00:12:49.910 4.870 - 4.897: 99.4562% ( 1) 00:12:49.910 5.037 - 5.064: 99.4624% ( 1) 00:12:49.910 5.064 - 5.092: 99.4686% ( 1) 00:12:49.910 5.120 - 5.148: 99.4748% ( 1) 00:12:49.910 5.148 - 5.176: 99.4809% ( 1) 00:12:49.910 5.176 - 5.203: 99.4871% ( 1) 00:12:49.910 5.203 - 5.231: 99.4933% ( 1) 00:12:49.910 5.398 - 5.426: 99.4995% ( 1) 00:12:49.910 5.510 - 5.537: 99.5057% ( 1) 00:12:49.910 5.732 - 5.760: 99.5118% ( 1) 00:12:49.910 5.760 - 5.788: 99.5242% ( 2) 00:12:49.910 5.816 - 5.843: 99.5304% ( 1) 00:12:49.910 6.205 - 6.233: 99.5366% ( 1) 00:12:49.910 6.400 - 6.428: 99.5427% ( 1) 00:12:49.910 7.179 - 7.235: 99.5489% ( 1) 00:12:49.910 9.850 - 9.906: 99.5551% ( 1) 00:12:49.910 10.518 - 10.574: 99.5613% ( 1) 00:12:49.910 40.070 - 40.292: 99.5674% ( 1) 00:12:49.910 147.812 - 148.703: 99.5736% ( 1) 00:12:49.910 2008.821 - 2023.068: 99.5798% ( 1) 00:12:49.910 3989.148 - 4017.642: 99.9938% ( 67) 00:12:49.910 5983.722 - 6012.216: 100.0000% ( 1) 00:12:49.910 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:49.910 [ 00:12:49.910 { 00:12:49.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:49.910 "subtype": "Discovery", 00:12:49.910 "listen_addresses": [], 00:12:49.910 
"allow_any_host": true, 00:12:49.910 "hosts": [] 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:49.910 "subtype": "NVMe", 00:12:49.910 "listen_addresses": [ 00:12:49.910 { 00:12:49.910 "trtype": "VFIOUSER", 00:12:49.910 "adrfam": "IPv4", 00:12:49.910 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:49.910 "trsvcid": "0" 00:12:49.910 } 00:12:49.910 ], 00:12:49.910 "allow_any_host": true, 00:12:49.910 "hosts": [], 00:12:49.910 "serial_number": "SPDK1", 00:12:49.910 "model_number": "SPDK bdev Controller", 00:12:49.910 "max_namespaces": 32, 00:12:49.910 "min_cntlid": 1, 00:12:49.910 "max_cntlid": 65519, 00:12:49.910 "namespaces": [ 00:12:49.910 { 00:12:49.910 "nsid": 1, 00:12:49.910 "bdev_name": "Malloc1", 00:12:49.910 "name": "Malloc1", 00:12:49.910 "nguid": "E4620C97A459475FA6F3A0F37B46E4FD", 00:12:49.910 "uuid": "e4620c97-a459-475f-a6f3-a0f37b46e4fd" 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "nsid": 2, 00:12:49.910 "bdev_name": "Malloc3", 00:12:49.910 "name": "Malloc3", 00:12:49.910 "nguid": "2CF79F137D4B483B8CB91D4B9D7E1EA8", 00:12:49.910 "uuid": "2cf79f13-7d4b-483b-8cb9-1d4b9d7e1ea8" 00:12:49.910 } 00:12:49.910 ] 00:12:49.910 }, 00:12:49.910 { 00:12:49.910 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:49.910 "subtype": "NVMe", 00:12:49.910 "listen_addresses": [ 00:12:49.910 { 00:12:49.910 "trtype": "VFIOUSER", 00:12:49.910 "adrfam": "IPv4", 00:12:49.910 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:49.910 "trsvcid": "0" 00:12:49.910 } 00:12:49.910 ], 00:12:49.910 "allow_any_host": true, 00:12:49.910 "hosts": [], 00:12:49.910 "serial_number": "SPDK2", 00:12:49.910 "model_number": "SPDK bdev Controller", 00:12:49.910 "max_namespaces": 32, 00:12:49.910 "min_cntlid": 1, 00:12:49.910 "max_cntlid": 65519, 00:12:49.910 "namespaces": [ 00:12:49.910 { 00:12:49.910 "nsid": 1, 00:12:49.910 "bdev_name": "Malloc2", 00:12:49.910 "name": "Malloc2", 00:12:49.910 "nguid": "588B11A68AC949458FBBEDE15C6DB314", 00:12:49.910 
"uuid": "588b11a6-8ac9-4945-8fbb-ede15c6db314" 00:12:49.910 } 00:12:49.910 ] 00:12:49.910 } 00:12:49.910 ] 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2475448 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:49.910 14:17:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:49.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.169 [2024-07-12 14:17:41.991912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.169 Malloc4 00:12:50.169 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:50.429 [2024-07-12 14:17:42.216652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.429 Asynchronous Event Request test 00:12:50.429 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.429 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.429 Registering asynchronous event callbacks... 00:12:50.429 Starting namespace attribute notice tests for all controllers... 00:12:50.429 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:50.429 aer_cb - Changed Namespace 00:12:50.429 Cleaning up... 
00:12:50.429 [ 00:12:50.429 { 00:12:50.429 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.429 "subtype": "Discovery", 00:12:50.429 "listen_addresses": [], 00:12:50.429 "allow_any_host": true, 00:12:50.429 "hosts": [] 00:12:50.429 }, 00:12:50.429 { 00:12:50.429 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.429 "subtype": "NVMe", 00:12:50.429 "listen_addresses": [ 00:12:50.429 { 00:12:50.429 "trtype": "VFIOUSER", 00:12:50.429 "adrfam": "IPv4", 00:12:50.429 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.429 "trsvcid": "0" 00:12:50.429 } 00:12:50.429 ], 00:12:50.429 "allow_any_host": true, 00:12:50.429 "hosts": [], 00:12:50.429 "serial_number": "SPDK1", 00:12:50.429 "model_number": "SPDK bdev Controller", 00:12:50.429 "max_namespaces": 32, 00:12:50.429 "min_cntlid": 1, 00:12:50.429 "max_cntlid": 65519, 00:12:50.429 "namespaces": [ 00:12:50.429 { 00:12:50.429 "nsid": 1, 00:12:50.429 "bdev_name": "Malloc1", 00:12:50.429 "name": "Malloc1", 00:12:50.429 "nguid": "E4620C97A459475FA6F3A0F37B46E4FD", 00:12:50.429 "uuid": "e4620c97-a459-475f-a6f3-a0f37b46e4fd" 00:12:50.429 }, 00:12:50.429 { 00:12:50.429 "nsid": 2, 00:12:50.429 "bdev_name": "Malloc3", 00:12:50.429 "name": "Malloc3", 00:12:50.429 "nguid": "2CF79F137D4B483B8CB91D4B9D7E1EA8", 00:12:50.429 "uuid": "2cf79f13-7d4b-483b-8cb9-1d4b9d7e1ea8" 00:12:50.429 } 00:12:50.429 ] 00:12:50.429 }, 00:12:50.429 { 00:12:50.429 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.429 "subtype": "NVMe", 00:12:50.429 "listen_addresses": [ 00:12:50.429 { 00:12:50.429 "trtype": "VFIOUSER", 00:12:50.429 "adrfam": "IPv4", 00:12:50.429 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.429 "trsvcid": "0" 00:12:50.429 } 00:12:50.429 ], 00:12:50.429 "allow_any_host": true, 00:12:50.429 "hosts": [], 00:12:50.429 "serial_number": "SPDK2", 00:12:50.429 "model_number": "SPDK bdev Controller", 00:12:50.429 "max_namespaces": 32, 00:12:50.429 "min_cntlid": 1, 00:12:50.429 "max_cntlid": 65519, 00:12:50.429 "namespaces": [ 
00:12:50.429 { 00:12:50.429 "nsid": 1, 00:12:50.429 "bdev_name": "Malloc2", 00:12:50.429 "name": "Malloc2", 00:12:50.429 "nguid": "588B11A68AC949458FBBEDE15C6DB314", 00:12:50.429 "uuid": "588b11a6-8ac9-4945-8fbb-ede15c6db314" 00:12:50.429 }, 00:12:50.429 { 00:12:50.429 "nsid": 2, 00:12:50.429 "bdev_name": "Malloc4", 00:12:50.429 "name": "Malloc4", 00:12:50.429 "nguid": "FDB0EAB14BAD42938114F9E4A72F7461", 00:12:50.429 "uuid": "fdb0eab1-4bad-4293-8114-f9e4a72f7461" 00:12:50.429 } 00:12:50.429 ] 00:12:50.429 } 00:12:50.429 ] 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2475448 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2467676 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2467676 ']' 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2467676 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.429 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2467676 00:12:50.688 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.688 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.688 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2467676' 00:12:50.688 killing process with pid 2467676 00:12:50.688 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2467676 00:12:50.688 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2467676 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2475566 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2475566' 00:12:50.948 Process pid: 2475566 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2475566 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2475566 ']' 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.948 14:17:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:50.948 [2024-07-12 14:17:42.778143] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:50.948 [2024-07-12 14:17:42.778977] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:12:50.948 [2024-07-12 14:17:42.779016] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.948 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.948 [2024-07-12 14:17:42.831935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.948 [2024-07-12 14:17:42.911476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.948 [2024-07-12 14:17:42.911513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.948 [2024-07-12 14:17:42.911520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.948 [2024-07-12 14:17:42.911526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.948 [2024-07-12 14:17:42.911531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:50.948 [2024-07-12 14:17:42.911568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.948 [2024-07-12 14:17:42.911666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.948 [2024-07-12 14:17:42.911759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.948 [2024-07-12 14:17:42.911761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.207 [2024-07-12 14:17:42.989333] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:51.207 [2024-07-12 14:17:42.989447] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:51.207 [2024-07-12 14:17:42.989662] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:51.207 [2024-07-12 14:17:42.989988] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:51.207 [2024-07-12 14:17:42.990237] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:51.774 14:17:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.774 14:17:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:51.774 14:17:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:52.711 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:52.969 Malloc1 00:12:52.969 14:17:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:53.228 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:53.486 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:53.486 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.486 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:12:53.486 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:53.744 Malloc2 00:12:53.744 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:54.002 14:17:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2475566 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2475566 ']' 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2475566 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2475566 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2475566' 00:12:54.261 killing 
process with pid 2475566 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2475566 00:12:54.261 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2475566 00:12:54.520 14:17:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:54.520 14:17:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:54.520 00:12:54.520 real 0m51.282s 00:12:54.520 user 3m23.154s 00:12:54.520 sys 0m3.536s 00:12:54.520 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.520 14:17:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:54.520 ************************************ 00:12:54.520 END TEST nvmf_vfio_user 00:12:54.520 ************************************ 00:12:54.520 14:17:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:54.520 14:17:46 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.520 14:17:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.520 14:17:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.520 14:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.778 ************************************ 00:12:54.778 START TEST nvmf_vfio_user_nvme_compliance 00:12:54.778 ************************************ 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.778 * Looking for test storage... 
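The `setup_nvmf_vfio_user` sequence replayed in the trace above (create the VFIOUSER transport, then for each device: malloc bdev, subsystem, namespace, vfio-user listener) reduces to a short JSON-RPC sequence. The sketch below only builds the request bodies that `rpc.py` would send to `/var/tmp/spdk.sock`; the method names are real SPDK RPCs, but the parameter set is abbreviated and the socket transport is omitted, so treat it as an illustration rather than a drop-in script:

```python
# Sketch of the JSON-RPC requests behind the rpc.py calls in the log above.
# Builds request bodies only; actually sending them over the SPDK UNIX
# socket is left out. NQNs and traddr paths mirror the ones in the log.

import json

def rpc(method, **params):
    """Build one SPDK JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

requests = [rpc("nvmf_create_transport", trtype="VFIOUSER")]
for i in (1, 2):
    requests += [
        # 64 MiB malloc bdev: 131072 blocks of 512 bytes
        rpc("bdev_malloc_create", num_blocks=131072, block_size=512,
            name=f"Malloc{i}"),
        rpc("nvmf_create_subsystem", nqn=f"nqn.2019-07.io.spdk:cnode{i}",
            allow_any_host=True, serial_number=f"SPDK{i}"),
        rpc("nvmf_subsystem_add_ns", nqn=f"nqn.2019-07.io.spdk:cnode{i}",
            namespace={"bdev_name": f"Malloc{i}"}),
        rpc("nvmf_subsystem_add_listener", nqn=f"nqn.2019-07.io.spdk:cnode{i}",
            listen_address={"trtype": "VFIOUSER",
                            "traddr": f"/var/run/vfio-user/domain/vfio-user{i}/{i}",
                            "trsvcid": "0"}),
    ]

print(json.dumps(requests[0]))
```

This matches the shape returned by `nvmf_get_subsystems` in the log: two NVMe subsystems, each listening on a vfio-user socket directory with `trsvcid` 0.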
00:12:54.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.778 14:17:46 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:54.778 14:17:46 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2476316 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2476316' 00:12:54.778 Process pid: 2476316 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2476316 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2476316 ']' 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.778 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.779 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.779 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.779 14:17:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.779 [2024-07-12 14:17:46.715264] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:12:54.779 [2024-07-12 14:17:46.715307] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.779 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.779 [2024-07-12 14:17:46.768185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.037 [2024-07-12 14:17:46.841299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.037 [2024-07-12 14:17:46.841338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:55.037 [2024-07-12 14:17:46.841345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.037 [2024-07-12 14:17:46.841352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.037 [2024-07-12 14:17:46.841357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.037 [2024-07-12 14:17:46.841404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.037 [2024-07-12 14:17:46.841502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.037 [2024-07-12 14:17:46.841502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.604 14:17:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.604 14:17:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:55.604 14:17:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.541 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.800 malloc0 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.800 14:17:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:56.800 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.800 00:12:56.800 00:12:56.800 CUnit - A unit testing framework for C - Version 2.1-3 00:12:56.800 http://cunit.sourceforge.net/ 00:12:56.800 00:12:56.800 00:12:56.800 Suite: nvme_compliance 00:12:56.800 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 14:17:48.747868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.800 [2024-07-12 14:17:48.749191] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:56.800 [2024-07-12 14:17:48.749207] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:56.800 [2024-07-12 14:17:48.749213] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:56.800 [2024-07-12 14:17:48.751891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.800 passed 00:12:57.059 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 14:17:48.830441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.059 [2024-07-12 14:17:48.833458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.059 passed 00:12:57.059 Test: admin_identify_ns ...[2024-07-12 14:17:48.914863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.059 [2024-07-12 14:17:48.975386] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:57.059 [2024-07-12 14:17:48.983400] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:57.059 [2024-07-12 14:17:49.004487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:12:57.059 passed 00:12:57.318 Test: admin_get_features_mandatory_features ...[2024-07-12 14:17:49.080664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.319 [2024-07-12 14:17:49.083686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.319 passed 00:12:57.319 Test: admin_get_features_optional_features ...[2024-07-12 14:17:49.163220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.319 [2024-07-12 14:17:49.166241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.319 passed 00:12:57.319 Test: admin_set_features_number_of_queues ...[2024-07-12 14:17:49.244829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.577 [2024-07-12 14:17:49.350469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.577 passed 00:12:57.577 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 14:17:49.427656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.577 [2024-07-12 14:17:49.430677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.577 passed 00:12:57.577 Test: admin_get_log_page_with_lpo ...[2024-07-12 14:17:49.504566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.577 [2024-07-12 14:17:49.576389] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:57.836 [2024-07-12 14:17:49.589435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.836 passed 00:12:57.836 Test: fabric_property_get ...[2024-07-12 14:17:49.665559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.836 [2024-07-12 14:17:49.666785] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:12:57.836 [2024-07-12 14:17:49.668574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.836 passed 00:12:57.836 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 14:17:49.746059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.836 [2024-07-12 14:17:49.747296] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:57.836 [2024-07-12 14:17:49.749078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.836 passed 00:12:57.836 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 14:17:49.828002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.096 [2024-07-12 14:17:49.912382] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.096 [2024-07-12 14:17:49.928386] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.096 [2024-07-12 14:17:49.933479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.096 passed 00:12:58.096 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 14:17:50.010717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.096 [2024-07-12 14:17:50.011943] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:58.096 [2024-07-12 14:17:50.013739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.096 passed 00:12:58.096 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 14:17:50.089932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.355 [2024-07-12 14:17:50.169392] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:58.355 [2024-07-12 14:17:50.193397] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:58.355 [2024-07-12 14:17:50.198533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.355 passed 00:12:58.355 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 14:17:50.274926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.355 [2024-07-12 14:17:50.276171] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:58.355 [2024-07-12 14:17:50.276194] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:58.355 [2024-07-12 14:17:50.279958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.355 passed 00:12:58.355 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 14:17:50.356970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.614 [2024-07-12 14:17:50.448395] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:58.614 [2024-07-12 14:17:50.456384] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:58.614 [2024-07-12 14:17:50.464395] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:58.614 [2024-07-12 14:17:50.472388] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:58.614 [2024-07-12 14:17:50.501455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.614 passed 00:12:58.614 Test: admin_create_io_sq_verify_pc ...[2024-07-12 14:17:50.579581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.614 [2024-07-12 14:17:50.598394] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:58.614 [2024-07-12 14:17:50.615768] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.873 passed 00:12:58.873 Test: admin_create_io_qp_max_qps ...[2024-07-12 14:17:50.696319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.811 [2024-07-12 14:17:51.790387] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:00.379 [2024-07-12 14:17:52.193602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.379 passed 00:13:00.379 Test: admin_create_io_sq_shared_cq ...[2024-07-12 14:17:52.265884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.639 [2024-07-12 14:17:52.397386] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:00.639 [2024-07-12 14:17:52.434443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.639 passed 00:13:00.639 00:13:00.639 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.639 suites 1 1 n/a 0 0 00:13:00.639 tests 18 18 18 0 0 00:13:00.639 asserts 360 360 360 0 n/a 00:13:00.639 00:13:00.639 Elapsed time = 1.520 seconds 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2476316 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2476316 ']' 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2476316 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2476316 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2476316' 00:13:00.639 killing process with pid 2476316 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2476316 00:13:00.639 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2476316 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:00.898 00:13:00.898 real 0m6.176s 00:13:00.898 user 0m17.701s 00:13:00.898 sys 0m0.455s 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:00.898 ************************************ 00:13:00.898 END TEST nvmf_vfio_user_nvme_compliance 00:13:00.898 ************************************ 00:13:00.898 14:17:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.898 14:17:52 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:00.898 14:17:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.898 14:17:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.898 14:17:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.898 ************************************ 00:13:00.898 START TEST nvmf_vfio_user_fuzz 00:13:00.898 ************************************ 00:13:00.898 14:17:52 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:00.898 * Looking for test storage... 00:13:00.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.898 
14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.898 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2477411 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2477411' 00:13:01.159 Process pid: 2477411 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2477411 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2477411 ']' 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.159 14:17:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:02.096 14:17:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.096 14:17:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:02.097 14:17:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 malloc0 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:03.033 14:17:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:35.151 Fuzzing completed. 
Shutting down the fuzz application 00:13:35.151 00:13:35.151 Dumping successful admin opcodes: 00:13:35.151 8, 9, 10, 24, 00:13:35.151 Dumping successful io opcodes: 00:13:35.151 0, 00:13:35.151 NS: 0x200003a1ef00 I/O qp, Total commands completed: 986095, total successful commands: 3866, random_seed: 540620800 00:13:35.151 NS: 0x200003a1ef00 admin qp, Total commands completed: 243210, total successful commands: 1957, random_seed: 395577984 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2477411 ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2477411' 00:13:35.151 killing process with pid 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2477411 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:35.151 00:13:35.151 real 0m32.739s 00:13:35.151 user 0m31.175s 00:13:35.151 sys 0m29.816s 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.151 14:18:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.151 ************************************ 00:13:35.151 END TEST nvmf_vfio_user_fuzz 00:13:35.151 ************************************ 00:13:35.151 14:18:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:35.151 14:18:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.151 14:18:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.151 14:18:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.151 14:18:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.151 ************************************ 00:13:35.151 START TEST nvmf_host_management 00:13:35.151 ************************************ 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.151 * Looking for test storage... 
00:13:35.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.151 
14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:35.151 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.152 14:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.392 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:39.393 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.393 
14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:39.393 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:39.393 Found net devices under 0000:86:00.0: cvl_0_0 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:39.393 Found net devices under 0000:86:00.1: cvl_0_1 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.393 14:18:30 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:39.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:13:39.393 00:13:39.393 --- 10.0.0.2 ping statistics --- 00:13:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.393 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:39.393 00:13:39.393 --- 10.0.0.1 ping statistics --- 00:13:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.393 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.393 14:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:39.393 14:18:31 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2486340 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2486340 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2486340 ']' 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.393 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.393 [2024-07-12 14:18:31.080313] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:13:39.393 [2024-07-12 14:18:31.080357] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.393 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.393 [2024-07-12 14:18:31.138299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.394 [2024-07-12 14:18:31.219080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.394 [2024-07-12 14:18:31.219115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.394 [2024-07-12 14:18:31.219122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.394 [2024-07-12 14:18:31.219128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.394 [2024-07-12 14:18:31.219133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:39.394 [2024-07-12 14:18:31.219228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.394 [2024-07-12 14:18:31.219313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.394 [2024-07-12 14:18:31.219421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.394 [2024-07-12 14:18:31.219422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 [2024-07-12 14:18:31.936262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.963 14:18:31 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.963 14:18:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.223 Malloc0 00:13:40.223 [2024-07-12 14:18:31.996032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2486604 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2486604 /var/tmp/bdevperf.sock 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2486604 ']' 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:40.223 { 00:13:40.223 "params": { 00:13:40.223 "name": "Nvme$subsystem", 00:13:40.223 "trtype": "$TEST_TRANSPORT", 00:13:40.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.223 "adrfam": "ipv4", 00:13:40.223 "trsvcid": "$NVMF_PORT", 00:13:40.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.223 "hdgst": ${hdgst:-false}, 00:13:40.223 "ddgst": ${ddgst:-false} 00:13:40.223 }, 00:13:40.223 "method": "bdev_nvme_attach_controller" 00:13:40.223 } 00:13:40.223 EOF 00:13:40.223 )") 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:40.223 14:18:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:40.223 "params": { 00:13:40.223 "name": "Nvme0", 00:13:40.223 "trtype": "tcp", 00:13:40.223 "traddr": "10.0.0.2", 00:13:40.223 "adrfam": "ipv4", 00:13:40.223 "trsvcid": "4420", 00:13:40.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:40.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:40.223 "hdgst": false, 00:13:40.223 "ddgst": false 00:13:40.223 }, 00:13:40.223 "method": "bdev_nvme_attach_controller" 00:13:40.223 }' 00:13:40.223 [2024-07-12 14:18:32.087828] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:13:40.223 [2024-07-12 14:18:32.087871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486604 ] 00:13:40.223 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.223 [2024-07-12 14:18:32.142272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.223 [2024-07-12 14:18:32.215446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.482 Running I/O for 10 seconds... 
00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.052 
14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=981 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 981 -ge 100 ']' 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.052 [2024-07-12 14:18:32.975281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 [2024-07-12 14:18:32.975385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e460 is same with the state(5) to be set 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.052 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.053 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.053 [2024-07-12 14:18:32.981811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.053 [2024-07-12 14:18:32.981845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.981854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.053 [2024-07-12 14:18:32.981862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.981871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.053 [2024-07-12 14:18:32.981884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.981893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.053 [2024-07-12 14:18:32.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.981909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4b980 is same with the state(5) to be set 00:13:41.053 [2024-07-12 14:18:32.982236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 
14:18:32.982305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:41.053 [2024-07-12 14:18:32.982616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 
14:18:32.982716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.053 [2024-07-12 14:18:32.982733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.053 [2024-07-12 14:18:32.982743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.982988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.982996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 
[2024-07-12 14:18:32.983112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.054 [2024-07-12 14:18:32.983291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.054 [2024-07-12 14:18:32.983300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.055 [2024-07-12 14:18:32.983318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.055 [2024-07-12 14:18:32.983337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.055 [2024-07-12 14:18:32.983354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.055 [2024-07-12 14:18:32.983372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.055 [2024-07-12 14:18:32.983394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.055 [2024-07-12 14:18:32.983455] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x115cb20 was disconnected and freed. reset controller. 
00:13:41.055 [2024-07-12 14:18:32.984352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:41.055 task offset: 8192 on job bdev=Nvme0n1 fails 00:13:41.055 00:13:41.055 Latency(us) 00:13:41.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:41.055 Job: Nvme0n1 ended in about 0.57 seconds with error 00:13:41.055 Verification LBA range: start 0x0 length 0x400 00:13:41.055 Nvme0n1 : 0.57 1902.10 118.88 111.89 0.00 31100.75 1966.08 27582.11 00:13:41.055 =================================================================================================================== 00:13:41.055 Total : 1902.10 118.88 111.89 0.00 31100.75 1966.08 27582.11 00:13:41.055 [2024-07-12 14:18:32.985934] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.055 [2024-07-12 14:18:32.985950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4b980 (9): Bad file descriptor 00:13:41.055 [2024-07-12 14:18:32.990800] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:41.055 14:18:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 14:18:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:41.991 14:18:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2486604 00:13:41.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2486604) - No such process 00:13:41.991 14:18:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:41.992 14:18:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:42.251 14:18:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:42.251 14:18:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.251 { 00:13:42.251 "params": { 00:13:42.251 "name": "Nvme$subsystem", 00:13:42.251 "trtype": "$TEST_TRANSPORT", 00:13:42.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.251 "adrfam": "ipv4", 00:13:42.251 "trsvcid": "$NVMF_PORT", 00:13:42.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.251 "hdgst": ${hdgst:-false}, 00:13:42.251 "ddgst": ${ddgst:-false} 00:13:42.251 }, 00:13:42.251 "method": "bdev_nvme_attach_controller" 00:13:42.251 } 
00:13:42.251 EOF 00:13:42.251 )") 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:42.251 14:18:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.251 "params": { 00:13:42.251 "name": "Nvme0", 00:13:42.251 "trtype": "tcp", 00:13:42.251 "traddr": "10.0.0.2", 00:13:42.251 "adrfam": "ipv4", 00:13:42.251 "trsvcid": "4420", 00:13:42.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.251 "hdgst": false, 00:13:42.251 "ddgst": false 00:13:42.251 }, 00:13:42.251 "method": "bdev_nvme_attach_controller" 00:13:42.251 }' 00:13:42.251 [2024-07-12 14:18:34.046198] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:13:42.251 [2024-07-12 14:18:34.046244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486856 ] 00:13:42.251 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.251 [2024-07-12 14:18:34.100653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.251 [2024-07-12 14:18:34.171165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.510 Running I/O for 1 seconds... 
00:13:43.447 00:13:43.447 Latency(us) 00:13:43.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.447 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.447 Verification LBA range: start 0x0 length 0x400 00:13:43.447 Nvme0n1 : 1.01 1969.82 123.11 0.00 0.00 31977.88 6667.58 27240.18 00:13:43.447 =================================================================================================================== 00:13:43.447 Total : 1969.82 123.11 0.00 0.00 31977.88 6667.58 27240.18 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.713 rmmod nvme_tcp 00:13:43.713 rmmod nvme_fabrics 00:13:43.713 rmmod nvme_keyring 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.713 
14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2486340 ']' 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2486340 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2486340 ']' 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2486340 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2486340 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2486340' 00:13:43.713 killing process with pid 2486340 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2486340 00:13:43.713 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2486340 00:13:43.972 [2024-07-12 14:18:35.814587] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.972 14:18:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.509 14:18:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.509 14:18:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:46.509 00:13:46.509 real 0m12.297s 00:13:46.509 user 0m22.345s 00:13:46.509 sys 0m5.073s 00:13:46.509 14:18:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.509 14:18:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:46.509 ************************************ 00:13:46.509 END TEST nvmf_host_management 00:13:46.509 ************************************ 00:13:46.509 14:18:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:46.509 14:18:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:46.509 14:18:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:46.509 14:18:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.509 14:18:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.509 ************************************ 00:13:46.509 START TEST nvmf_lvol 00:13:46.509 ************************************ 00:13:46.509 14:18:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:46.509 * 
Looking for test storage... 00:13:46.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.509 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.510 14:18:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:51.785 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:51.785 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:51.785 Found net devices under 0000:86:00.0: cvl_0_0 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.785 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:51.786 Found net devices under 0000:86:00.1: cvl_0_1 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.786 14:18:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:13:51.786 00:13:51.786 --- 10.0.0.2 ping statistics --- 00:13:51.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.786 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:51.786 00:13:51.786 --- 10.0.0.1 ping statistics --- 00:13:51.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.786 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2490606 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2490606 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2490606 ']' 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.786 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:51.786 [2024-07-12 14:18:43.166582] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:13:51.786 [2024-07-12 14:18:43.166626] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.786 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.786 [2024-07-12 14:18:43.223480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.786 [2024-07-12 14:18:43.303150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.786 [2024-07-12 14:18:43.303184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.786 [2024-07-12 14:18:43.303192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.786 [2024-07-12 14:18:43.303199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.786 [2024-07-12 14:18:43.303204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:51.786 [2024-07-12 14:18:43.303240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.786 [2024-07-12 14:18:43.303335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.786 [2024-07-12 14:18:43.303337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.044 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.044 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:52.044 14:18:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.044 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.044 14:18:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.044 14:18:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.044 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.302 [2024-07-12 14:18:44.168354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.302 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.560 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:52.560 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.817 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:52.817 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:52.817 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:53.075 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=22c2ed65-959f-4e2d-87b1-ffdeacfd5af5 00:13:53.075 14:18:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22c2ed65-959f-4e2d-87b1-ffdeacfd5af5 lvol 20 00:13:53.334 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3a53ffe4-4ef0-478c-8d89-f207a9b449d1 00:13:53.334 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:53.334 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a53ffe4-4ef0-478c-8d89-f207a9b449d1 00:13:53.592 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:53.851 [2024-07-12 14:18:45.661006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.851 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.110 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2491105 00:13:54.110 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:54.110 14:18:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:54.110 EAL: No free 2048 kB hugepages reported on node 1 
00:13:55.046 14:18:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3a53ffe4-4ef0-478c-8d89-f207a9b449d1 MY_SNAPSHOT 00:13:55.304 14:18:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8575258b-47cf-4e72-add0-355048c84dfc 00:13:55.304 14:18:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3a53ffe4-4ef0-478c-8d89-f207a9b449d1 30 00:13:55.562 14:18:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8575258b-47cf-4e72-add0-355048c84dfc MY_CLONE 00:13:55.562 14:18:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=53e36779-8e3e-450d-a6cd-cd87612c03eb 00:13:55.562 14:18:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 53e36779-8e3e-450d-a6cd-cd87612c03eb 00:13:56.127 14:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2491105 00:14:04.237 Initializing NVMe Controllers 00:14:04.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:04.237 Controller IO queue size 128, less than required. 00:14:04.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:04.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:04.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:04.237 Initialization complete. Launching workers. 
00:14:04.237 ======================================================== 00:14:04.237 Latency(us) 00:14:04.237 Device Information : IOPS MiB/s Average min max 00:14:04.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12618.10 49.29 10148.63 1293.88 65393.68 00:14:04.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12487.00 48.78 10250.08 3528.08 61791.77 00:14:04.237 ======================================================== 00:14:04.237 Total : 25105.10 98.07 10199.09 1293.88 65393.68 00:14:04.237 00:14:04.237 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:04.495 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a53ffe4-4ef0-478c-8d89-f207a9b449d1 00:14:04.754 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22c2ed65-959f-4e2d-87b1-ffdeacfd5af5 00:14:04.754 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.755 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.755 rmmod nvme_tcp 00:14:05.039 rmmod nvme_fabrics 00:14:05.039 rmmod nvme_keyring 00:14:05.039 
14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2490606 ']' 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2490606 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2490606 ']' 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2490606 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2490606 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2490606' 00:14:05.039 killing process with pid 2490606 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2490606 00:14:05.039 14:18:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2490606 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.299 14:18:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.205 14:18:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.205 00:14:07.205 real 0m21.171s 00:14:07.205 user 1m3.765s 00:14:07.205 sys 0m6.505s 00:14:07.205 14:18:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.205 14:18:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.205 ************************************ 00:14:07.205 END TEST nvmf_lvol 00:14:07.205 ************************************ 00:14:07.205 14:18:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.205 14:18:59 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.205 14:18:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.205 14:18:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.205 14:18:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.465 ************************************ 00:14:07.465 START TEST nvmf_lvs_grow 00:14:07.465 ************************************ 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:07.465 * Looking for test storage... 
00:14:07.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.465 14:18:59 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.465 14:18:59 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.465 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.466 14:18:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.466 14:18:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.742 14:19:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.742 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.743 14:19:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:12.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:12.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.743 14:19:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:12.743 Found net devices under 0000:86:00.0: cvl_0_0 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:12.743 Found net devices under 0000:86:00.1: cvl_0_1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:14:12.743 00:14:12.743 --- 10.0.0.2 ping statistics --- 00:14:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.743 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:14:12.743 00:14:12.743 --- 10.0.0.1 ping statistics --- 00:14:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.743 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2496242 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2496242 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2496242 ']' 
00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.743 14:19:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.743 [2024-07-12 14:19:04.640644] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:12.743 [2024-07-12 14:19:04.640686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.743 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.743 [2024-07-12 14:19:04.697628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.002 [2024-07-12 14:19:04.777022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.002 [2024-07-12 14:19:04.777056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.002 [2024-07-12 14:19:04.777064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.002 [2024-07-12 14:19:04.777070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.002 [2024-07-12 14:19:04.777076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:13.002 [2024-07-12 14:19:04.777098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.576 14:19:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:13.834 [2024-07-12 14:19:05.640103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.834 14:19:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:13.834 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:13.834 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.834 14:19:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:13.835 ************************************ 00:14:13.835 START TEST lvs_grow_clean 00:14:13.835 ************************************ 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.835 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.093 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:14.093 14:19:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:14.093 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:14.093 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:14.093 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:14.352 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:14.352 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:14.352 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd8e1c34-decc-4f5d-b736-e868c234f6dd lvol 150 00:14:14.611 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=165c0530-b484-4899-9237-d55b91386298 00:14:14.611 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:14.611 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:14.611 [2024-07-12 14:19:06.583116] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:14.611 [2024-07-12 14:19:06.583165] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:14.611 true 00:14:14.611 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:14.611 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:14.870 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:14.870 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
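The cluster counts being asserted here follow directly from the sizes the script sets up: a 200 MiB AIO file carved into 4 MiB clusters (`--cluster-sz 4194304`) yields 49 data clusters, and after the file is grown to 400 MiB and the lvstore rescanned, 99. A minimal sketch of that arithmetic, assuming (as the 50→49 and 100→99 pattern in the log implies) roughly one cluster's worth of lvstore metadata overhead; the exact overhead in real SPDK depends on lvstore options such as `--md-pages-per-cluster-ratio`:

```python
def expected_data_clusters(aio_size_mb, cluster_size_mb=4, metadata_clusters=1):
    """Clusters available for data in an lvstore built on an AIO bdev.

    `metadata_clusters=1` is inferred from this log (50 raw clusters
    -> 49 data clusters, 100 -> 99); it is not a general SPDK
    guarantee."""
    total = aio_size_mb // cluster_size_mb
    return total - metadata_clusters

print(expected_data_clusters(200))  # 49, the first `data_clusters` check
print(expected_data_clusters(400))  # 99, after bdev_lvol_grow_lvstore
```

This is why the test later asserts `(( data_clusters == 99 ))` once `bdev_lvol_grow_lvstore` has picked up the resized AIO bdev.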
00:14:15.129 14:19:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 165c0530-b484-4899-9237-d55b91386298 00:14:15.129 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:15.388 [2024-07-12 14:19:07.257142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.388 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.647 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2496746 00:14:15.647 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.647 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:15.647 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2496746 /var/tmp/bdevperf.sock 00:14:15.647 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2496746 ']' 00:14:15.648 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.648 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.648 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:15.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:15.648 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.648 14:19:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:15.648 [2024-07-12 14:19:07.480012] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:15.648 [2024-07-12 14:19:07.480060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496746 ] 00:14:15.648 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.648 [2024-07-12 14:19:07.534310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.648 [2024-07-12 14:19:07.614515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.585 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.585 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:16.585 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:16.585 Nvme0n1 00:14:16.585 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.844 [ 00:14:16.844 { 00:14:16.844 "name": "Nvme0n1", 00:14:16.844 "aliases": [ 00:14:16.844 "165c0530-b484-4899-9237-d55b91386298" 
00:14:16.844 ], 00:14:16.844 "product_name": "NVMe disk", 00:14:16.844 "block_size": 4096, 00:14:16.844 "num_blocks": 38912, 00:14:16.844 "uuid": "165c0530-b484-4899-9237-d55b91386298", 00:14:16.844 "assigned_rate_limits": { 00:14:16.844 "rw_ios_per_sec": 0, 00:14:16.844 "rw_mbytes_per_sec": 0, 00:14:16.844 "r_mbytes_per_sec": 0, 00:14:16.844 "w_mbytes_per_sec": 0 00:14:16.844 }, 00:14:16.844 "claimed": false, 00:14:16.844 "zoned": false, 00:14:16.844 "supported_io_types": { 00:14:16.844 "read": true, 00:14:16.844 "write": true, 00:14:16.844 "unmap": true, 00:14:16.844 "flush": true, 00:14:16.844 "reset": true, 00:14:16.844 "nvme_admin": true, 00:14:16.844 "nvme_io": true, 00:14:16.844 "nvme_io_md": false, 00:14:16.844 "write_zeroes": true, 00:14:16.844 "zcopy": false, 00:14:16.844 "get_zone_info": false, 00:14:16.844 "zone_management": false, 00:14:16.844 "zone_append": false, 00:14:16.844 "compare": true, 00:14:16.844 "compare_and_write": true, 00:14:16.844 "abort": true, 00:14:16.844 "seek_hole": false, 00:14:16.844 "seek_data": false, 00:14:16.844 "copy": true, 00:14:16.844 "nvme_iov_md": false 00:14:16.844 }, 00:14:16.844 "memory_domains": [ 00:14:16.844 { 00:14:16.844 "dma_device_id": "system", 00:14:16.844 "dma_device_type": 1 00:14:16.844 } 00:14:16.844 ], 00:14:16.844 "driver_specific": { 00:14:16.844 "nvme": [ 00:14:16.844 { 00:14:16.844 "trid": { 00:14:16.844 "trtype": "TCP", 00:14:16.844 "adrfam": "IPv4", 00:14:16.844 "traddr": "10.0.0.2", 00:14:16.844 "trsvcid": "4420", 00:14:16.844 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:16.844 }, 00:14:16.844 "ctrlr_data": { 00:14:16.844 "cntlid": 1, 00:14:16.844 "vendor_id": "0x8086", 00:14:16.844 "model_number": "SPDK bdev Controller", 00:14:16.844 "serial_number": "SPDK0", 00:14:16.844 "firmware_revision": "24.09", 00:14:16.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.844 "oacs": { 00:14:16.844 "security": 0, 00:14:16.844 "format": 0, 00:14:16.844 "firmware": 0, 00:14:16.844 "ns_manage": 0 
00:14:16.844 }, 00:14:16.844 "multi_ctrlr": true, 00:14:16.844 "ana_reporting": false 00:14:16.844 }, 00:14:16.844 "vs": { 00:14:16.844 "nvme_version": "1.3" 00:14:16.844 }, 00:14:16.844 "ns_data": { 00:14:16.844 "id": 1, 00:14:16.844 "can_share": true 00:14:16.844 } 00:14:16.844 } 00:14:16.844 ], 00:14:16.844 "mp_policy": "active_passive" 00:14:16.844 } 00:14:16.844 } 00:14:16.844 ] 00:14:16.844 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2496978 00:14:16.844 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.844 14:19:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.844 Running I/O for 10 seconds... 00:14:18.220 Latency(us) 00:14:18.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.220 Nvme0n1 : 1.00 23009.00 89.88 0.00 0.00 0.00 0.00 0.00 00:14:18.220 =================================================================================================================== 00:14:18.220 Total : 23009.00 89.88 0.00 0.00 0.00 0.00 0.00 00:14:18.220 00:14:18.787 14:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:19.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.047 Nvme0n1 : 2.00 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:14:19.047 =================================================================================================================== 00:14:19.047 Total : 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:14:19.047 00:14:19.047 true 00:14:19.047 14:19:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:19.047 14:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:19.305 14:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:19.305 14:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:19.305 14:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2496978 00:14:19.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.874 Nvme0n1 : 3.00 23082.33 90.17 0.00 0.00 0.00 0.00 0.00 00:14:19.874 =================================================================================================================== 00:14:19.874 Total : 23082.33 90.17 0.00 0.00 0.00 0.00 0.00 00:14:19.874 00:14:21.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.266 Nvme0n1 : 4.00 23163.75 90.48 0.00 0.00 0.00 0.00 0.00 00:14:21.266 =================================================================================================================== 00:14:21.266 Total : 23163.75 90.48 0.00 0.00 0.00 0.00 0.00 00:14:21.266 00:14:22.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.203 Nvme0n1 : 5.00 23219.80 90.70 0.00 0.00 0.00 0.00 0.00 00:14:22.203 =================================================================================================================== 00:14:22.203 Total : 23219.80 90.70 0.00 0.00 0.00 0.00 0.00 00:14:22.203 00:14:23.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.140 Nvme0n1 : 6.00 23267.33 90.89 0.00 0.00 0.00 0.00 0.00 00:14:23.140 
=================================================================================================================== 00:14:23.140 Total : 23267.33 90.89 0.00 0.00 0.00 0.00 0.00 00:14:23.140 00:14:24.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.078 Nvme0n1 : 7.00 23290.71 90.98 0.00 0.00 0.00 0.00 0.00 00:14:24.078 =================================================================================================================== 00:14:24.078 Total : 23290.71 90.98 0.00 0.00 0.00 0.00 0.00 00:14:24.078 00:14:25.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.017 Nvme0n1 : 8.00 23324.25 91.11 0.00 0.00 0.00 0.00 0.00 00:14:25.017 =================================================================================================================== 00:14:25.017 Total : 23324.25 91.11 0.00 0.00 0.00 0.00 0.00 00:14:25.017 00:14:25.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.952 Nvme0n1 : 9.00 23343.22 91.18 0.00 0.00 0.00 0.00 0.00 00:14:25.952 =================================================================================================================== 00:14:25.952 Total : 23343.22 91.18 0.00 0.00 0.00 0.00 0.00 00:14:25.952 00:14:26.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.888 Nvme0n1 : 10.00 23347.80 91.20 0.00 0.00 0.00 0.00 0.00 00:14:26.888 =================================================================================================================== 00:14:26.888 Total : 23347.80 91.20 0.00 0.00 0.00 0.00 0.00 00:14:26.888 00:14:26.888 00:14:26.888 Latency(us) 00:14:26.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.888 Nvme0n1 : 10.00 23353.21 91.22 0.00 0.00 5477.92 3219.81 12366.36 00:14:26.888 
=================================================================================================================== 00:14:26.888 Total : 23353.21 91.22 0.00 0.00 5477.92 3219.81 12366.36 00:14:26.888 0 00:14:26.888 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2496746 00:14:26.888 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2496746 ']' 00:14:26.888 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2496746 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2496746 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2496746' 00:14:27.145 killing process with pid 2496746 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2496746 00:14:27.145 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.145 00:14:27.145 Latency(us) 00:14:27.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.145 =================================================================================================================== 00:14:27.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.145 14:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2496746 00:14:27.145 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.403 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.661 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:27.661 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:27.661 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:27.661 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:27.661 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:27.920 [2024-07-12 14:19:19.802934] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:27.920 14:19:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:28.178 request: 00:14:28.178 { 00:14:28.178 "uuid": "fd8e1c34-decc-4f5d-b736-e868c234f6dd", 00:14:28.178 "method": "bdev_lvol_get_lvstores", 00:14:28.178 "req_id": 1 00:14:28.178 } 00:14:28.178 Got JSON-RPC error response 00:14:28.178 response: 00:14:28.178 { 00:14:28.178 "code": -19, 00:14:28.179 "message": "No such device" 00:14:28.179 } 00:14:28.179 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:28.179 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.179 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.179 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.179 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.436 aio_bdev 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 165c0530-b484-4899-9237-d55b91386298 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=165c0530-b484-4899-9237-d55b91386298 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.436 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 165c0530-b484-4899-9237-d55b91386298 -t 2000 00:14:28.696 [ 00:14:28.696 { 00:14:28.696 "name": "165c0530-b484-4899-9237-d55b91386298", 00:14:28.696 "aliases": [ 00:14:28.696 "lvs/lvol" 00:14:28.696 ], 00:14:28.696 "product_name": "Logical Volume", 00:14:28.696 "block_size": 4096, 00:14:28.696 "num_blocks": 38912, 00:14:28.696 "uuid": "165c0530-b484-4899-9237-d55b91386298", 00:14:28.696 "assigned_rate_limits": { 00:14:28.696 
"rw_ios_per_sec": 0, 00:14:28.696 "rw_mbytes_per_sec": 0, 00:14:28.696 "r_mbytes_per_sec": 0, 00:14:28.696 "w_mbytes_per_sec": 0 00:14:28.696 }, 00:14:28.696 "claimed": false, 00:14:28.696 "zoned": false, 00:14:28.696 "supported_io_types": { 00:14:28.696 "read": true, 00:14:28.696 "write": true, 00:14:28.696 "unmap": true, 00:14:28.696 "flush": false, 00:14:28.696 "reset": true, 00:14:28.696 "nvme_admin": false, 00:14:28.696 "nvme_io": false, 00:14:28.696 "nvme_io_md": false, 00:14:28.696 "write_zeroes": true, 00:14:28.696 "zcopy": false, 00:14:28.696 "get_zone_info": false, 00:14:28.696 "zone_management": false, 00:14:28.696 "zone_append": false, 00:14:28.696 "compare": false, 00:14:28.696 "compare_and_write": false, 00:14:28.696 "abort": false, 00:14:28.696 "seek_hole": true, 00:14:28.696 "seek_data": true, 00:14:28.696 "copy": false, 00:14:28.696 "nvme_iov_md": false 00:14:28.696 }, 00:14:28.696 "driver_specific": { 00:14:28.696 "lvol": { 00:14:28.696 "lvol_store_uuid": "fd8e1c34-decc-4f5d-b736-e868c234f6dd", 00:14:28.696 "base_bdev": "aio_bdev", 00:14:28.696 "thin_provision": false, 00:14:28.696 "num_allocated_clusters": 38, 00:14:28.696 "snapshot": false, 00:14:28.696 "clone": false, 00:14:28.696 "esnap_clone": false 00:14:28.696 } 00:14:28.696 } 00:14:28.696 } 00:14:28.696 ] 00:14:28.696 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:28.696 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:28.696 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:28.954 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:28.954 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:28.954 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:28.954 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:28.954 14:19:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 165c0530-b484-4899-9237-d55b91386298 00:14:29.212 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd8e1c34-decc-4f5d-b736-e868c234f6dd 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.510 00:14:29.510 real 0m15.742s 00:14:29.510 user 0m15.467s 00:14:29.510 sys 0m1.376s 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:29.510 ************************************ 00:14:29.510 END TEST lvs_grow_clean 00:14:29.510 ************************************ 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.510 14:19:21 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:29.511 ************************************ 00:14:29.511 START TEST lvs_grow_dirty 00:14:29.511 ************************************ 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.511 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.773 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:29.773 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:30.030 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c43550b3-25da-4d1f-a762-97110913fcba 00:14:30.030 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:30.030 14:19:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c43550b3-25da-4d1f-a762-97110913fcba lvol 150 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.287 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:30.545 [2024-07-12 14:19:22.376956] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:30.545 [2024-07-12 14:19:22.377006] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:30.545 
true 00:14:30.545 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.545 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:30.804 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.804 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:30.804 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:31.061 14:19:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.061 [2024-07-12 14:19:23.034938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.061 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.319 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2499545 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2499545 /var/tmp/bdevperf.sock 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2499545 ']' 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.320 14:19:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:31.320 [2024-07-12 14:19:23.231497] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:14:31.320 [2024-07-12 14:19:23.231540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499545 ] 00:14:31.320 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.320 [2024-07-12 14:19:23.284867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.578 [2024-07-12 14:19:23.364408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.146 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.146 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:32.146 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:32.405 Nvme0n1 00:14:32.405 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:32.664 [ 00:14:32.664 { 00:14:32.664 "name": "Nvme0n1", 00:14:32.664 "aliases": [ 00:14:32.664 "5aba18f1-5926-4a13-a6a6-95c3ca66bac8" 00:14:32.664 ], 00:14:32.664 "product_name": "NVMe disk", 00:14:32.664 "block_size": 4096, 00:14:32.664 "num_blocks": 38912, 00:14:32.664 "uuid": "5aba18f1-5926-4a13-a6a6-95c3ca66bac8", 00:14:32.664 "assigned_rate_limits": { 00:14:32.664 "rw_ios_per_sec": 0, 00:14:32.664 "rw_mbytes_per_sec": 0, 00:14:32.664 "r_mbytes_per_sec": 0, 00:14:32.664 "w_mbytes_per_sec": 0 00:14:32.664 }, 00:14:32.664 "claimed": false, 00:14:32.664 "zoned": false, 00:14:32.664 "supported_io_types": { 00:14:32.664 "read": true, 00:14:32.664 "write": true, 
00:14:32.664 "unmap": true, 00:14:32.664 "flush": true, 00:14:32.664 "reset": true, 00:14:32.664 "nvme_admin": true, 00:14:32.664 "nvme_io": true, 00:14:32.664 "nvme_io_md": false, 00:14:32.664 "write_zeroes": true, 00:14:32.664 "zcopy": false, 00:14:32.664 "get_zone_info": false, 00:14:32.664 "zone_management": false, 00:14:32.664 "zone_append": false, 00:14:32.664 "compare": true, 00:14:32.664 "compare_and_write": true, 00:14:32.664 "abort": true, 00:14:32.664 "seek_hole": false, 00:14:32.664 "seek_data": false, 00:14:32.664 "copy": true, 00:14:32.664 "nvme_iov_md": false 00:14:32.664 }, 00:14:32.664 "memory_domains": [ 00:14:32.664 { 00:14:32.664 "dma_device_id": "system", 00:14:32.664 "dma_device_type": 1 00:14:32.664 } 00:14:32.664 ], 00:14:32.664 "driver_specific": { 00:14:32.664 "nvme": [ 00:14:32.664 { 00:14:32.664 "trid": { 00:14:32.664 "trtype": "TCP", 00:14:32.664 "adrfam": "IPv4", 00:14:32.664 "traddr": "10.0.0.2", 00:14:32.664 "trsvcid": "4420", 00:14:32.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:32.664 }, 00:14:32.664 "ctrlr_data": { 00:14:32.664 "cntlid": 1, 00:14:32.664 "vendor_id": "0x8086", 00:14:32.664 "model_number": "SPDK bdev Controller", 00:14:32.664 "serial_number": "SPDK0", 00:14:32.664 "firmware_revision": "24.09", 00:14:32.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.664 "oacs": { 00:14:32.664 "security": 0, 00:14:32.664 "format": 0, 00:14:32.664 "firmware": 0, 00:14:32.664 "ns_manage": 0 00:14:32.664 }, 00:14:32.664 "multi_ctrlr": true, 00:14:32.664 "ana_reporting": false 00:14:32.664 }, 00:14:32.664 "vs": { 00:14:32.664 "nvme_version": "1.3" 00:14:32.664 }, 00:14:32.664 "ns_data": { 00:14:32.664 "id": 1, 00:14:32.664 "can_share": true 00:14:32.664 } 00:14:32.664 } 00:14:32.664 ], 00:14:32.664 "mp_policy": "active_passive" 00:14:32.664 } 00:14:32.664 } 00:14:32.664 ] 00:14:32.664 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.664 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2499762 00:14:32.664 14:19:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:32.664 Running I/O for 10 seconds... 00:14:34.043 Latency(us) 00:14:34.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.043 Nvme0n1 : 1.00 23065.00 90.10 0.00 0.00 0.00 0.00 0.00 00:14:34.043 =================================================================================================================== 00:14:34.043 Total : 23065.00 90.10 0.00 0.00 0.00 0.00 0.00 00:14:34.043 00:14:34.611 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:34.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.870 Nvme0n1 : 2.00 23249.00 90.82 0.00 0.00 0.00 0.00 0.00 00:14:34.870 =================================================================================================================== 00:14:34.870 Total : 23249.00 90.82 0.00 0.00 0.00 0.00 0.00 00:14:34.870 00:14:34.870 true 00:14:34.870 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:34.870 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:35.128 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:35.128 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:35.128 14:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2499762 00:14:35.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.694 Nvme0n1 : 3.00 23288.67 90.97 0.00 0.00 0.00 0.00 0.00 00:14:35.694 =================================================================================================================== 00:14:35.694 Total : 23288.67 90.97 0.00 0.00 0.00 0.00 0.00 00:14:35.694 00:14:37.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.072 Nvme0n1 : 4.00 23325.50 91.12 0.00 0.00 0.00 0.00 0.00 00:14:37.072 =================================================================================================================== 00:14:37.072 Total : 23325.50 91.12 0.00 0.00 0.00 0.00 0.00 00:14:37.072 00:14:37.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.641 Nvme0n1 : 5.00 23359.40 91.25 0.00 0.00 0.00 0.00 0.00 00:14:37.641 =================================================================================================================== 00:14:37.641 Total : 23359.40 91.25 0.00 0.00 0.00 0.00 0.00 00:14:37.641 00:14:39.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.017 Nvme0n1 : 6.00 23331.17 91.14 0.00 0.00 0.00 0.00 0.00 00:14:39.017 =================================================================================================================== 00:14:39.017 Total : 23331.17 91.14 0.00 0.00 0.00 0.00 0.00 00:14:39.017 00:14:39.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.955 Nvme0n1 : 7.00 23354.57 91.23 0.00 0.00 0.00 0.00 0.00 00:14:39.955 =================================================================================================================== 00:14:39.955 Total : 23354.57 91.23 0.00 0.00 0.00 0.00 0.00 00:14:39.955 00:14:40.892 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.892 Nvme0n1 : 8.00 23380.25 91.33 0.00 0.00 0.00 0.00 0.00 00:14:40.892 =================================================================================================================== 00:14:40.892 Total : 23380.25 91.33 0.00 0.00 0.00 0.00 0.00 00:14:40.892 00:14:41.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.829 Nvme0n1 : 9.00 23393.22 91.38 0.00 0.00 0.00 0.00 0.00 00:14:41.829 =================================================================================================================== 00:14:41.830 Total : 23393.22 91.38 0.00 0.00 0.00 0.00 0.00 00:14:41.830 00:14:42.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.765 Nvme0n1 : 10.00 23396.60 91.39 0.00 0.00 0.00 0.00 0.00 00:14:42.765 =================================================================================================================== 00:14:42.765 Total : 23396.60 91.39 0.00 0.00 0.00 0.00 0.00 00:14:42.765 00:14:42.765 00:14:42.765 Latency(us) 00:14:42.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.765 Nvme0n1 : 10.00 23399.90 91.41 0.00 0.00 5467.24 1759.50 10485.76 00:14:42.765 =================================================================================================================== 00:14:42.765 Total : 23399.90 91.41 0.00 0.00 5467.24 1759.50 10485.76 00:14:42.765 0 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2499545 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2499545 ']' 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2499545 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@953 -- # uname 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2499545 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2499545' 00:14:42.766 killing process with pid 2499545 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2499545 00:14:42.766 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.766 00:14:42.766 Latency(us) 00:14:42.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.766 =================================================================================================================== 00:14:42.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.766 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2499545 00:14:43.025 14:19:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.284 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.284 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:43.284 
14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2496242 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2496242 00:14:43.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2496242 Killed "${NVMF_APP[@]}" "$@" 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2501435 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2501435 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2501435 ']' 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.543 14:19:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:43.543 [2024-07-12 14:19:35.520881] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:43.543 [2024-07-12 14:19:35.520927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.543 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.802 [2024-07-12 14:19:35.577736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.802 [2024-07-12 14:19:35.656051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.802 [2024-07-12 14:19:35.656085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.802 [2024-07-12 14:19:35.656092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.802 [2024-07-12 14:19:35.656098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.802 [2024-07-12 14:19:35.656103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.802 [2024-07-12 14:19:35.656119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.370 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.630 [2024-07-12 14:19:36.501551] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:44.630 [2024-07-12 14:19:36.501631] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:44.630 [2024-07-12 14:19:36.501654] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.630 14:19:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.630 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:44.889 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 -t 2000 00:14:44.889 [ 00:14:44.889 { 00:14:44.889 "name": "5aba18f1-5926-4a13-a6a6-95c3ca66bac8", 00:14:44.889 "aliases": [ 00:14:44.889 "lvs/lvol" 00:14:44.889 ], 00:14:44.889 "product_name": "Logical Volume", 00:14:44.889 "block_size": 4096, 00:14:44.889 "num_blocks": 38912, 00:14:44.889 "uuid": "5aba18f1-5926-4a13-a6a6-95c3ca66bac8", 00:14:44.889 "assigned_rate_limits": { 00:14:44.889 "rw_ios_per_sec": 0, 00:14:44.889 "rw_mbytes_per_sec": 0, 00:14:44.889 "r_mbytes_per_sec": 0, 00:14:44.889 "w_mbytes_per_sec": 0 00:14:44.889 }, 00:14:44.889 "claimed": false, 00:14:44.889 "zoned": false, 00:14:44.889 "supported_io_types": { 00:14:44.889 "read": true, 00:14:44.889 "write": true, 00:14:44.889 "unmap": true, 00:14:44.889 "flush": false, 00:14:44.889 "reset": true, 00:14:44.889 "nvme_admin": false, 00:14:44.889 "nvme_io": false, 00:14:44.889 "nvme_io_md": false, 00:14:44.889 "write_zeroes": true, 00:14:44.889 "zcopy": false, 00:14:44.889 "get_zone_info": false, 00:14:44.889 "zone_management": false, 00:14:44.889 "zone_append": false, 00:14:44.889 "compare": false, 00:14:44.889 "compare_and_write": false, 00:14:44.889 "abort": false, 00:14:44.889 "seek_hole": true, 00:14:44.889 "seek_data": true, 00:14:44.889 "copy": false, 00:14:44.889 "nvme_iov_md": false 
00:14:44.889 }, 00:14:44.889 "driver_specific": { 00:14:44.889 "lvol": { 00:14:44.889 "lvol_store_uuid": "c43550b3-25da-4d1f-a762-97110913fcba", 00:14:44.889 "base_bdev": "aio_bdev", 00:14:44.889 "thin_provision": false, 00:14:44.889 "num_allocated_clusters": 38, 00:14:44.889 "snapshot": false, 00:14:44.889 "clone": false, 00:14:44.889 "esnap_clone": false 00:14:44.889 } 00:14:44.889 } 00:14:44.889 } 00:14:44.889 ] 00:14:44.889 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:44.889 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:44.889 14:19:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:45.149 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:45.149 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:45.149 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.408 [2024-07-12 14:19:37.345943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c43550b3-25da-4d1f-a762-97110913fcba 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:45.408 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:45.668 request: 00:14:45.668 { 00:14:45.668 "uuid": "c43550b3-25da-4d1f-a762-97110913fcba", 00:14:45.668 "method": "bdev_lvol_get_lvstores", 
00:14:45.668 "req_id": 1 00:14:45.668 } 00:14:45.668 Got JSON-RPC error response 00:14:45.668 response: 00:14:45.668 { 00:14:45.668 "code": -19, 00:14:45.668 "message": "No such device" 00:14:45.668 } 00:14:45.668 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:45.668 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.668 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.668 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.668 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.927 aio_bdev 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.927 14:19:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 -t 2000 00:14:46.186 [ 00:14:46.186 { 00:14:46.186 "name": "5aba18f1-5926-4a13-a6a6-95c3ca66bac8", 00:14:46.186 "aliases": [ 00:14:46.186 "lvs/lvol" 00:14:46.186 ], 00:14:46.186 "product_name": "Logical Volume", 00:14:46.186 "block_size": 4096, 00:14:46.186 "num_blocks": 38912, 00:14:46.186 "uuid": "5aba18f1-5926-4a13-a6a6-95c3ca66bac8", 00:14:46.186 "assigned_rate_limits": { 00:14:46.186 "rw_ios_per_sec": 0, 00:14:46.186 "rw_mbytes_per_sec": 0, 00:14:46.186 "r_mbytes_per_sec": 0, 00:14:46.186 "w_mbytes_per_sec": 0 00:14:46.186 }, 00:14:46.186 "claimed": false, 00:14:46.186 "zoned": false, 00:14:46.186 "supported_io_types": { 00:14:46.186 "read": true, 00:14:46.186 "write": true, 00:14:46.186 "unmap": true, 00:14:46.186 "flush": false, 00:14:46.186 "reset": true, 00:14:46.186 "nvme_admin": false, 00:14:46.186 "nvme_io": false, 00:14:46.186 "nvme_io_md": false, 00:14:46.186 "write_zeroes": true, 00:14:46.186 "zcopy": false, 00:14:46.186 "get_zone_info": false, 00:14:46.186 "zone_management": false, 00:14:46.186 "zone_append": false, 00:14:46.186 "compare": false, 00:14:46.186 "compare_and_write": false, 00:14:46.186 "abort": false, 00:14:46.186 "seek_hole": true, 00:14:46.186 "seek_data": true, 00:14:46.186 "copy": false, 00:14:46.186 "nvme_iov_md": false 00:14:46.186 }, 00:14:46.186 "driver_specific": { 00:14:46.186 "lvol": { 00:14:46.186 "lvol_store_uuid": "c43550b3-25da-4d1f-a762-97110913fcba", 00:14:46.186 "base_bdev": "aio_bdev", 00:14:46.186 "thin_provision": false, 00:14:46.186 "num_allocated_clusters": 38, 00:14:46.186 "snapshot": false, 00:14:46.186 "clone": false, 00:14:46.186 "esnap_clone": false 00:14:46.186 } 00:14:46.186 } 00:14:46.187 } 00:14:46.187 ] 00:14:46.187 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:46.187 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:46.187 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:46.445 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:46.445 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:46.445 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:46.445 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:46.445 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5aba18f1-5926-4a13-a6a6-95c3ca66bac8 00:14:46.705 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c43550b3-25da-4d1f-a762-97110913fcba 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.964 00:14:46.964 real 0m17.439s 00:14:46.964 user 0m44.987s 00:14:46.964 sys 0m3.596s 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:14:46.964 ************************************ 00:14:46.964 END TEST lvs_grow_dirty 00:14:46.964 ************************************ 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:46.964 14:19:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:46.964 nvmf_trace.0 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.223 rmmod 
nvme_tcp 00:14:47.223 rmmod nvme_fabrics 00:14:47.223 rmmod nvme_keyring 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2501435 ']' 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2501435 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2501435 ']' 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2501435 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2501435 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2501435' 00:14:47.223 killing process with pid 2501435 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2501435 00:14:47.223 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2501435 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.483 14:19:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.389 14:19:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.389 00:14:49.389 real 0m42.127s 00:14:49.389 user 1m6.159s 00:14:49.389 sys 0m9.288s 00:14:49.389 14:19:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.389 14:19:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:49.389 ************************************ 00:14:49.389 END TEST nvmf_lvs_grow 00:14:49.389 ************************************ 00:14:49.389 14:19:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:49.389 14:19:41 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:49.389 14:19:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.389 14:19:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.389 14:19:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.647 ************************************ 00:14:49.647 START TEST nvmf_bdev_io_wait 00:14:49.647 ************************************ 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:49.647 * Looking for test storage... 
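The `process_shm`/`nvmftestfini` teardown traced above archives the `nvmf_trace.0` shared-memory file before unloading the nvme-tcp modules and killing the target. The archiving step can be sketched roughly as follows; the throwaway directories stand in for `/dev/shm` and the autotest output directory so the sketch runs without root, and are not the paths the harness actually uses.

```shell
#!/usr/bin/env bash
# Rough sketch of the process_shm archiving step above: collect every
# shared-memory file named "*.<shm id>" into per-file tarballs so traces
# survive after the nvmf_tgt process exits. shm_id=0 matches "--id 0" in
# the log; shm_dir/out_dir are illustrative stand-ins for /dev/shm and
# the autotest output directory.
set -euo pipefail

shm_id=0
shm_dir=$(mktemp -d)            # stand-in for /dev/shm
out_dir=$(mktemp -d)            # stand-in for spdk/../output
echo "dummy trace" > "$shm_dir/nvmf_trace.$shm_id"

# Mirrors: find /dev/shm -name '*.0' -printf '%f\n'
shm_files=$(find "$shm_dir" -name "*.${shm_id}" -printf '%f\n')

# Mirrors: tar -C /dev/shm/ -cvzf <output>/<name>_shm.tar.gz <name>
for f in $shm_files; do
    tar -C "$shm_dir" -czf "$out_dir/${f}_shm.tar.gz" "$f"
done

ls "$out_dir"
```

After the archive is written, the real harness goes on to `modprobe -v -r nvme-tcp nvme-fabrics` (with `set +e`, since the modules may already be gone) and `kill`s the recorded nvmfpid, as the subsequent log lines show.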
00:14:49.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.647 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.648 14:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.003 14:19:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:55.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:55.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:55.003 14:19:46 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.003 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:55.004 Found net devices under 0000:86:00.0: cvl_0_0 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:55.004 Found net devices under 0000:86:00.1: cvl_0_1 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:55.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:55.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:55.004 00:14:55.004 --- 10.0.0.2 ping statistics --- 00:14:55.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.004 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:14:55.004 00:14:55.004 --- 10.0.0.1 ping statistics --- 00:14:55.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.004 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2505457 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2505457 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2505457 ']' 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.004 14:19:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.004 [2024-07-12 14:19:46.484553] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:55.004 [2024-07-12 14:19:46.484599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.004 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.004 [2024-07-12 14:19:46.542462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.004 [2024-07-12 14:19:46.623755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:55.004 [2024-07-12 14:19:46.623794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.004 [2024-07-12 14:19:46.623801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.004 [2024-07-12 14:19:46.623808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.004 [2024-07-12 14:19:46.623813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.004 [2024-07-12 14:19:46.624052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.004 [2024-07-12 14:19:46.624129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.004 [2024-07-12 14:19:46.624259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.004 [2024-07-12 14:19:46.624261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 [2024-07-12 14:19:47.402261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 Malloc0 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.574 [2024-07-12 14:19:47.461471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2505706 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2505708 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.574 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.574 { 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme$subsystem", 00:14:55.575 "trtype": "$TEST_TRANSPORT", 
00:14:55.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "$NVMF_PORT", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.575 "hdgst": ${hdgst:-false}, 00:14:55.575 "ddgst": ${ddgst:-false} 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 } 00:14:55.575 EOF 00:14:55.575 )") 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2505710 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.575 { 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme$subsystem", 00:14:55.575 "trtype": "$TEST_TRANSPORT", 00:14:55.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "$NVMF_PORT", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.575 "hdgst": ${hdgst:-false}, 00:14:55.575 "ddgst": ${ddgst:-false} 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 } 00:14:55.575 EOF 00:14:55.575 )") 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2505713 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.575 { 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme$subsystem", 00:14:55.575 "trtype": "$TEST_TRANSPORT", 00:14:55.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "$NVMF_PORT", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.575 "hdgst": ${hdgst:-false}, 00:14:55.575 "ddgst": ${ddgst:-false} 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 } 00:14:55.575 EOF 00:14:55.575 )") 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.575 { 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme$subsystem", 00:14:55.575 "trtype": "$TEST_TRANSPORT", 00:14:55.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "$NVMF_PORT", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.575 "hdgst": ${hdgst:-false}, 00:14:55.575 "ddgst": ${ddgst:-false} 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 } 00:14:55.575 EOF 00:14:55.575 )") 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2505706 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme1", 00:14:55.575 "trtype": "tcp", 00:14:55.575 "traddr": "10.0.0.2", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "4420", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.575 "hdgst": false, 00:14:55.575 "ddgst": false 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 }' 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme1", 00:14:55.575 "trtype": "tcp", 00:14:55.575 "traddr": "10.0.0.2", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "4420", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.575 "hdgst": false, 00:14:55.575 "ddgst": false 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 }' 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme1", 00:14:55.575 "trtype": "tcp", 00:14:55.575 "traddr": "10.0.0.2", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "4420", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.575 "hdgst": false, 00:14:55.575 "ddgst": false 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 }' 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:55.575 14:19:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.575 "params": { 00:14:55.575 "name": "Nvme1", 00:14:55.575 "trtype": "tcp", 00:14:55.575 "traddr": "10.0.0.2", 00:14:55.575 "adrfam": "ipv4", 00:14:55.575 "trsvcid": "4420", 00:14:55.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.575 "hdgst": false, 00:14:55.575 "ddgst": false 00:14:55.575 }, 00:14:55.575 "method": "bdev_nvme_attach_controller" 00:14:55.575 }' 00:14:55.575 [2024-07-12 14:19:47.509286] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:14:55.575 [2024-07-12 14:19:47.509334] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:55.575 [2024-07-12 14:19:47.512274] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:55.575 [2024-07-12 14:19:47.512314] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:55.575 [2024-07-12 14:19:47.512886] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:14:55.575 [2024-07-12 14:19:47.512923] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:55.575 [2024-07-12 14:19:47.517120] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:14:55.575 [2024-07-12 14:19:47.517165] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:55.575 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.835 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.835 [2024-07-12 14:19:47.685128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.835 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.835 [2024-07-12 14:19:47.762954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:55.835 [2024-07-12 14:19:47.772762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.835 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.094 [2024-07-12 14:19:47.851484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:56.094 [2024-07-12 14:19:47.878328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.094 [2024-07-12 14:19:47.922360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.094 [2024-07-12 14:19:47.965283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:56.094 [2024-07-12 14:19:47.998466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:56.094 Running I/O for 1 seconds... 00:14:56.352 Running I/O for 1 seconds... 00:14:56.352 Running I/O for 1 seconds... 00:14:56.352 Running I/O for 1 seconds... 
00:14:57.288 00:14:57.288 Latency(us) 00:14:57.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.288 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:57.288 Nvme1n1 : 1.01 12976.26 50.69 0.00 0.00 9834.82 5784.26 18008.15 00:14:57.288 =================================================================================================================== 00:14:57.288 Total : 12976.26 50.69 0.00 0.00 9834.82 5784.26 18008.15 00:14:57.288 00:14:57.288 Latency(us) 00:14:57.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.288 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:57.288 Nvme1n1 : 1.00 245766.13 960.02 0.00 0.00 519.16 210.14 641.11 00:14:57.288 =================================================================================================================== 00:14:57.288 Total : 245766.13 960.02 0.00 0.00 519.16 210.14 641.11 00:14:57.288 00:14:57.288 Latency(us) 00:14:57.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.288 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:57.288 Nvme1n1 : 1.01 11888.22 46.44 0.00 0.00 10732.24 1852.10 14588.88 00:14:57.288 =================================================================================================================== 00:14:57.288 Total : 11888.22 46.44 0.00 0.00 10732.24 1852.10 14588.88 00:14:57.288 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2505708 00:14:57.288 00:14:57.288 Latency(us) 00:14:57.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.288 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:57.288 Nvme1n1 : 1.01 10236.21 39.99 0.00 0.00 12465.23 5784.26 24618.74 00:14:57.288 =================================================================================================================== 00:14:57.288 Total : 
10236.21 39.99 0.00 0.00 12465.23 5784.26 24618.74 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2505710 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2505713 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.547 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.547 rmmod nvme_tcp 00:14:57.806 rmmod nvme_fabrics 00:14:57.806 rmmod nvme_keyring 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2505457 ']' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2505457 00:14:57.806 
14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2505457 ']' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2505457 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2505457 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2505457' 00:14:57.806 killing process with pid 2505457 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2505457 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2505457 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.806 14:19:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.342 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.342 00:15:00.342 real 0m10.459s 00:15:00.342 user 0m19.580s 00:15:00.342 sys 0m5.405s 00:15:00.342 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.342 14:19:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.342 ************************************ 00:15:00.342 END TEST nvmf_bdev_io_wait 00:15:00.342 ************************************ 00:15:00.342 14:19:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.342 14:19:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:00.342 14:19:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.342 14:19:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.342 14:19:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.342 ************************************ 00:15:00.342 START TEST nvmf_queue_depth 00:15:00.342 ************************************ 00:15:00.342 14:19:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:00.342 * Looking for test storage... 
00:15:00.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:00.342 14:19:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.343 14:19:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:05.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:05.609 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:05.609 Found net devices under 0000:86:00.0: cvl_0_0 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:05.609 Found net devices under 0000:86:00.1: cvl_0_1 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:05.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:15:05.609 00:15:05.609 --- 10.0.0.2 ping statistics --- 00:15:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.609 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:15:05.609 00:15:05.609 --- 10.0.0.1 ping statistics --- 00:15:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.609 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.609 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2509487 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2509487 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2509487 ']' 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.610 14:19:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 [2024-07-12 14:19:57.500930] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:15:05.610 [2024-07-12 14:19:57.500976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.610 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.610 [2024-07-12 14:19:57.557758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.869 [2024-07-12 14:19:57.637878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:05.869 [2024-07-12 14:19:57.637912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.869 [2024-07-12 14:19:57.637919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.869 [2024-07-12 14:19:57.637925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.869 [2024-07-12 14:19:57.637930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.869 [2024-07-12 14:19:57.637947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 [2024-07-12 14:19:58.345414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.436 14:19:58 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 Malloc0 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.436 [2024-07-12 14:19:58.400140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2509732 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2509732 /var/tmp/bdevperf.sock 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2509732 ']' 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.436 14:19:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.694 [2024-07-12 14:19:58.449784] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:15:06.694 [2024-07-12 14:19:58.449824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509732 ] 00:15:06.694 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.694 [2024-07-12 14:19:58.503815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.694 [2024-07-12 14:19:58.582862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.261 14:19:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.261 14:19:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:07.261 14:19:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:07.261 14:19:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.261 14:19:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 NVMe0n1 00:15:07.520 14:19:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.520 14:19:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.520 Running I/O for 10 seconds... 
00:15:19.729 00:15:19.729 Latency(us) 00:15:19.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.729 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:19.729 Verification LBA range: start 0x0 length 0x4000 00:15:19.730 NVMe0n1 : 10.06 12382.46 48.37 0.00 0.00 82425.55 19603.81 57215.78 00:15:19.730 =================================================================================================================== 00:15:19.730 Total : 12382.46 48.37 0.00 0.00 82425.55 19603.81 57215.78 00:15:19.730 0 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2509732 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2509732 ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2509732 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2509732 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2509732' 00:15:19.730 killing process with pid 2509732 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2509732 00:15:19.730 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.730 00:15:19.730 Latency(us) 00:15:19.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.730 
=================================================================================================================== 00:15:19.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2509732 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.730 rmmod nvme_tcp 00:15:19.730 rmmod nvme_fabrics 00:15:19.730 rmmod nvme_keyring 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2509487 ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2509487 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2509487 ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2509487 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.730 14:20:09 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2509487 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2509487' 00:15:19.730 killing process with pid 2509487 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2509487 00:15:19.730 14:20:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2509487 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.730 14:20:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.298 14:20:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.298 00:15:20.298 real 0m20.205s 00:15:20.298 user 0m24.829s 00:15:20.298 sys 0m5.596s 00:15:20.298 14:20:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.298 14:20:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:20.298 ************************************ 00:15:20.298 END TEST nvmf_queue_depth 
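The bdevperf summary earlier in this run reported 12382.46 IOPS and 48.37 MiB/s for 4096-byte IOs; the two columns are consistent, since MiB/s is just IOPS times the IO size divided by 2^20. A quick cross-check of that arithmetic:

```shell
# Cross-check of the bdevperf summary line: 12382.46 IOPS at 4096-byte IOs
# should yield 12382.46 * 4096 / 1048576 = 48.37 MiB/s, matching the table.
mibs=$(awk 'BEGIN { printf "%.2f", 12382.46 * 4096 / 1048576 }')
echo "$mibs MiB/s"
```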
00:15:20.298 ************************************ 00:15:20.298 14:20:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:20.298 14:20:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.298 14:20:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:20.298 14:20:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.298 14:20:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.298 ************************************ 00:15:20.298 START TEST nvmf_target_multipath 00:15:20.298 ************************************ 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:20.298 * Looking for test storage... 00:15:20.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.298 14:20:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.299 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.558 14:20:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.831 
14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.831 14:20:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:25.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.831 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:25.832 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.832 
14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:25.832 Found net devices under 0000:86:00.0: cvl_0_0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
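The device-discovery trace above buckets NICs by PCI device ID (0x159b on 0000:86:00.0/0000:86:00.1 matching the Intel E810 entries) and then reduces the sysfs net paths to bare interface names with a suffix-strip expansion. A condensed sketch of both steps, with IDs and paths taken from the log:

```shell
# Sketch of the PCI-ID bucketing and interface-name extraction from the trace.
# 0x1592/0x159b are the E810 IDs the log checks; 0x37d2 is the x722 entry.
device=0x159b
case "$device" in
  0x1592|0x159b) family=e810 ;;
  0x37d2)        family=x722 ;;
  *)             family=mlx ;;
esac

# "${arr[@]##*/}" strips everything up to the last '/' from each element,
# turning a sysfs path into the interface name the later ip/iptables calls use.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "$family ${pci_net_devs[0]}"
```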
00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:25.832 Found net devices under 0000:86:00.1: cvl_0_1 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.832 14:20:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:25.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:15:25.832 00:15:25.832 --- 10.0.0.2 ping statistics --- 00:15:25.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.832 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:15:25.832 00:15:25.832 --- 10.0.0.1 ping statistics --- 00:15:25.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.832 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:25.832 only one NIC for nvmf test 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
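The `nvmf_tcp_init` steps traced above (create a namespace, move the target NIC into it, address both ends, open TCP port 4420, verify with ping) can be sketched as a single function. This is a dry-run illustration, not SPDK's actual `nvmf/common.sh`: the `run` wrapper echoes commands instead of executing them (the real commands need root), and only the interface/namespace names and addresses come from the log.

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP test network setup seen in the log.
# run() echoes each command so the sketch is safe to execute unprivileged.
run() { echo "+ $*"; }

setup_nvmf_tcp() {
  tgt_if=$1; init_if=$2; ns=$3
  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$init_if"
  run ip netns add "$ns"                                   # isolate the target side
  run ip link set "$tgt_if" netns "$ns"                    # target NIC lives in the netns
  run ip addr add 10.0.0.1/24 dev "$init_if"               # initiator address
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target address
  run ip link set "$init_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  run ping -c 1 10.0.0.2                                   # initiator -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator
}

setup_nvmf_tcp cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target interface in its own namespace is what lets a single host exercise a real TCP path between initiator and target over physical NICs, which is why the log runs the nvmf target under `ip netns exec cvl_0_0_ns_spdk`.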
nvmftestfini 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.832 rmmod nvme_tcp 00:15:25.832 rmmod nvme_fabrics 00:15:25.832 rmmod nvme_keyring 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.832 14:20:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
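The `nvmftestfini`/`nvmf_tcp_fini` path traced above unwinds the setup in reverse: unload the NVMe host modules, remove the SPDK namespace, and flush the initiator address. A minimal dry-run sketch, again with an echoing `run` wrapper; `ip netns delete` is my assumption for what `_remove_spdk_ns` does (the log redirects its output away), and `|| true` mirrors the log's `set +e` retry loop around `modprobe -r`.

```shell
#!/bin/sh
# Dry-run sketch of the teardown sequence from the log (commands echoed, not run).
run() { echo "+ $*"; }

teardown_nvmf_tcp() {
  init_if=$1; ns=$2
  run modprobe -v -r nvme-tcp || true       # log tolerates failures here (set +e)
  run modprobe -v -r nvme-fabrics || true
  run ip netns delete "$ns"                 # assumed body of _remove_spdk_ns
  run ip -4 addr flush "$init_if"           # drop 10.0.0.1/24 from the initiator NIC
}

teardown_nvmf_tcp cvl_0_1 cvl_0_0_ns_spdk
```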
_remove_spdk_ns 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.741 00:15:27.741 real 0m7.250s 00:15:27.741 user 0m1.366s 00:15:27.741 sys 0m3.838s 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.741 14:20:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:27.741 ************************************ 00:15:27.741 END TEST nvmf_target_multipath 00:15:27.741 ************************************ 00:15:27.741 14:20:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.741 14:20:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:27.741 14:20:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.741 14:20:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.741 14:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.741 ************************************ 00:15:27.741 START TEST nvmf_zcopy 00:15:27.741 ************************************ 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:27.741 * Looking for test storage... 
00:15:27.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:27.741 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.742 14:20:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.088 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.088 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.088 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.088 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.088 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.089 14:20:24 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.089 14:20:24 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:33.089 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:33.089 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:33.089 Found net devices under 0000:86:00.0: cvl_0_0 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:33.089 Found net devices under 0000:86:00.1: cvl_0_1 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.089 14:20:24 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:33.089 00:15:33.089 --- 10.0.0.2 ping statistics --- 00:15:33.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.089 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:33.089 00:15:33.089 --- 10.0.0.1 ping statistics --- 00:15:33.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.089 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2518365 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2518365 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2518365 ']' 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.089 14:20:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.089 [2024-07-12 14:20:25.003461] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:15:33.089 [2024-07-12 14:20:25.003504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.089 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.089 [2024-07-12 14:20:25.060152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.347 [2024-07-12 14:20:25.139022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.347 [2024-07-12 14:20:25.139056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.347 [2024-07-12 14:20:25.139066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.347 [2024-07-12 14:20:25.139072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.347 [2024-07-12 14:20:25.139078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.347 [2024-07-12 14:20:25.139094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.913 [2024-07-12 14:20:25.853392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.913 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 [2024-07-12 14:20:25.877522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 malloc0 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.914 { 00:15:33.914 "params": { 00:15:33.914 "name": "Nvme$subsystem", 00:15:33.914 "trtype": "$TEST_TRANSPORT", 00:15:33.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.914 "adrfam": "ipv4", 00:15:33.914 "trsvcid": "$NVMF_PORT", 00:15:33.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.914 "hdgst": ${hdgst:-false}, 00:15:33.914 "ddgst": ${ddgst:-false} 00:15:33.914 }, 00:15:33.914 "method": "bdev_nvme_attach_controller" 00:15:33.914 } 00:15:33.914 EOF 00:15:33.914 )") 00:15:33.914 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:34.172 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:34.172 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:34.172 14:20:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:34.172 "params": { 00:15:34.172 "name": "Nvme1", 00:15:34.172 "trtype": "tcp", 00:15:34.172 "traddr": "10.0.0.2", 00:15:34.172 "adrfam": "ipv4", 00:15:34.172 "trsvcid": "4420", 00:15:34.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.172 "hdgst": false, 00:15:34.172 "ddgst": false 00:15:34.172 }, 00:15:34.172 "method": "bdev_nvme_attach_controller" 00:15:34.172 }' 00:15:34.172 [2024-07-12 14:20:25.959343] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:15:34.172 [2024-07-12 14:20:25.959392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518608 ] 00:15:34.172 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.172 [2024-07-12 14:20:26.012782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.172 [2024-07-12 14:20:26.085930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.431 Running I/O for 10 seconds... 00:15:46.643 00:15:46.643 Latency(us) 00:15:46.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.643 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:46.643 Verification LBA range: start 0x0 length 0x1000 00:15:46.643 Nvme1n1 : 10.01 8700.06 67.97 0.00 0.00 14669.67 1617.03 25644.52 00:15:46.643 =================================================================================================================== 00:15:46.643 Total : 8700.06 67.97 0.00 0.00 14669.67 1617.03 25644.52 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2520223 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:46.643 14:20:36 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:46.643 { 00:15:46.643 "params": { 00:15:46.643 "name": "Nvme$subsystem", 00:15:46.643 "trtype": "$TEST_TRANSPORT", 00:15:46.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.643 "adrfam": "ipv4", 00:15:46.643 "trsvcid": "$NVMF_PORT", 00:15:46.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.643 "hdgst": ${hdgst:-false}, 00:15:46.643 "ddgst": ${ddgst:-false} 00:15:46.643 }, 00:15:46.643 "method": "bdev_nvme_attach_controller" 00:15:46.643 } 00:15:46.643 EOF 00:15:46.643 )") 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:46.643 [2024-07-12 14:20:36.633757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.643 [2024-07-12 14:20:36.633794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:46.643 14:20:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:46.643 "params": { 00:15:46.643 "name": "Nvme1", 00:15:46.643 "trtype": "tcp", 00:15:46.643 "traddr": "10.0.0.2", 00:15:46.643 "adrfam": "ipv4", 00:15:46.643 "trsvcid": "4420", 00:15:46.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.644 "hdgst": false, 00:15:46.644 "ddgst": false 00:15:46.644 }, 00:15:46.644 "method": "bdev_nvme_attach_controller" 00:15:46.644 }' 00:15:46.644 [2024-07-12 14:20:36.645757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.645771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.653773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.653783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.665807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.665819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.667189] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:15:46.644 [2024-07-12 14:20:36.667232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520223 ] 00:15:46.644 [2024-07-12 14:20:36.677839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.677850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.689870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.689881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.644 [2024-07-12 14:20:36.701903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.701914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.713937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.713948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.721810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.644 [2024-07-12 14:20:36.725972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.725983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.738003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.738016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.750036] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.750048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.762066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.762086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.774097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.774111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.786128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.786138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.796661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.644 [2024-07-12 14:20:36.798160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.798173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.810199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.810216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.822225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.822241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.834260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.834273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:46.644 [2024-07-12 14:20:36.846289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.846302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.858320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.858331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.870351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.870361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.882400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.882432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.894437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.894455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.906457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.906472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.918491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.918504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.930519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.930530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.942548] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.942559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.954596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.954610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.966629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.966642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.978659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.978669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:36.990680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:36.990691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.002711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.002723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.014747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.014760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.026779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.026789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.038818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.038834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.050850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.050864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.062876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.062886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.074913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.074923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.086949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.086959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.098982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.644 [2024-07-12 14:20:37.098993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.644 [2024-07-12 14:20:37.111022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.111040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 Running I/O for 5 seconds... 
00:15:46.645 [2024-07-12 14:20:37.123055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.123071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.134678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.134699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.144266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.144284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.158796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.158816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.172295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.172314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.186496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.186516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.200087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.200106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.213676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.213695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.227439] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.227458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.241543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.241562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.255619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.255638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.264520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.264539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.278938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.278961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.292697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.292716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.301561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.301580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.315515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.315533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.329163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.329182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.337950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.337968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.347040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.347059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.356152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.356170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.371140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.371159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.382622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.382641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.396581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.396600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.405239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.405258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.414609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 
[2024-07-12 14:20:37.414629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.428895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.428914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.442044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.442063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.451063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.451082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.465322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.465341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.474054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.474071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.488386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.488404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.497231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.497253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.506226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.506245] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.515265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.515283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.524567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.524585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.539017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.539037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.552752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.552771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.561845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.561863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.571068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.571086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.580397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.580416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.594722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.594742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:46.645 [2024-07-12 14:20:37.608392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.645 [2024-07-12 14:20:37.608420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.645 [2024-07-12 14:20:37.617240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.617260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.631608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.631627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.645054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.645073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.658903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.658922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.667861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.667879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.677188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.677207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.686486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.686505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646 [2024-07-12 14:20:37.695715] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.646 [2024-07-12 14:20:37.695733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.646
[the same two-line error pair (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1546:nvmf_rpc_ns_paused) repeats with successive timestamps from 2024-07-12 14:20:37.710 through 14:20:39.650, elapsed markers 00:15:46.646 to 00:15:47.684]
[2024-07-12 14:20:39.659260] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.684 [2024-07-12 14:20:39.659279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.684 [2024-07-12 14:20:39.668441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.684 [2024-07-12 14:20:39.668460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.684 [2024-07-12 14:20:39.682957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.684 [2024-07-12 14:20:39.682977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.696664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.696685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.705473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.705492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.714715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.714735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.724584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.724604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.739208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.739228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.748017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.748036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.762375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.762402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.775245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.775265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.789483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.789502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.803034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.803055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.816972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.816991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.825873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.825892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.840138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.840157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.853512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 
[2024-07-12 14:20:39.853531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.867709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.867728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.876663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.876681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.885357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.885376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.894671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.894690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.903889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.903908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.918264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.918282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.931772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.931790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.943 [2024-07-12 14:20:39.945634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.943 [2024-07-12 14:20:39.945653] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:39.959693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:39.959713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:39.968701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:39.968719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:39.982924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:39.982942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:39.991611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:39.991629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:40.000842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:40.000860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:40.016692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:40.016713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:40.031721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:40.031744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.202 [2024-07-12 14:20:40.046190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.202 [2024-07-12 14:20:40.046210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:48.203 [2024-07-12 14:20:40.060328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.060983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.069157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.069176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.079848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.079867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.088548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.088567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.098000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.098019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.107239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.107258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.116465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.116484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.125121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.125140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.134420] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.134438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.143832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.143850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.153180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.153198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.162107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.162125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.171443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.171461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.180240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.180258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.189297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.189315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.197952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.197970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.203 [2024-07-12 14:20:40.205177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:48.203 [2024-07-12 14:20:40.205196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.216189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.216209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.225086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.225105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.233717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.233736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.242825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.242844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.251930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.251948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.260620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.260638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.269990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.270009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.279455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 
[2024-07-12 14:20:40.279473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.288724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.288743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.297239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.297257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.306317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.306336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.315576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.315595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.324565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.324583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.333674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.333692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.342435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.342454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.351642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.351661] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.360252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.360270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.369449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.369468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.379190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.379209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.387998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.388016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.397078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.397096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.403926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.403944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.414169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.414188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.462 [2024-07-12 14:20:40.422944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.462 [2024-07-12 14:20:40.422963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:48.462 [2024-07-12 14:20:40.432632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.463 [2024-07-12 14:20:40.432651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.463 [2024-07-12 14:20:40.441341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.463 [2024-07-12 14:20:40.441359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.463 [2024-07-12 14:20:40.450606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.463 [2024-07-12 14:20:40.450625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.463 [2024-07-12 14:20:40.457398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.463 [2024-07-12 14:20:40.457417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.463 [2024-07-12 14:20:40.468776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.463 [2024-07-12 14:20:40.468795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.477615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.477634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.486257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.486276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.495412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.495430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.504658] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.504680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.514243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.514261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.522996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.523015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.532154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.532171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.541510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.541528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.549839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.549857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.558540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.558558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.567784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.567802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.576199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.576217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.585241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.585260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.593770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.593788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.602387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.602405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.610940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.610958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.620145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.620164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.628646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.628664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.637689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 [2024-07-12 14:20:40.637708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.722 [2024-07-12 14:20:40.646810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.722 
[2024-07-12 14:20:40.646829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.656165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.656184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.664760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.664778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.674579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.674601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.683723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.683742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.692901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.692920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.701913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.701931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.710321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.710340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.718946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.718965] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.723 [2024-07-12 14:20:40.725894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.723 [2024-07-12 14:20:40.725912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.737104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.737122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.745918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.745937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.754521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.754540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.763645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.763663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.772741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.772760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.781984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.782002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.790535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.790553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:48.982 [2024-07-12 14:20:40.799533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.799552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.808901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.808919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.818125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.818144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.827518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.827537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.836049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.836068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.845347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.845383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.854716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.854735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.863588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.863606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.872269] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.872289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.881473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.881492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.890639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.890660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.899652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.899672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.909144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.909163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.918246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.918266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.927486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.927505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.936618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.936636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.945203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.945222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.954783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.954801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.963491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.963510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.972535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.972554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.981005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.981023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.982 [2024-07-12 14:20:40.989522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.982 [2024-07-12 14:20:40.989540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:40.998337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:40.998356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.007437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.007456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.016812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 
[2024-07-12 14:20:41.016836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.025302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.025321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.034401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.034420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.043431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.043450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.052682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.052701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.061724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.061742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.070417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.070436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.078855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.078873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.087479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.087498] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.096519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.096538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.105141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.105160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.114164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.114183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.121272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.121290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.131486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.131505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.140122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.140141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.148848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.148866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.158019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.158038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:49.241 [2024-07-12 14:20:41.167234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.167253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.176539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.176558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.185779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.185799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.194888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.194907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.203973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.203991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.213020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.213039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.222210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.222229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.230735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.230753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.239908] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.239926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.241 [2024-07-12 14:20:41.249407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.241 [2024-07-12 14:20:41.249426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.258562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.258581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.267234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.267252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.276390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.276408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.285398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.285416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.293842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.293861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.302930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.302949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.311832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.311850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.320965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.320985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.330271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.330290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.338758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.338777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.347708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.347728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.356968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.356987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.366654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.366672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.375391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.375409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.383896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 
[2024-07-12 14:20:41.383914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.393122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.393140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.401669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.401686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.410807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.410825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.419797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.419815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.428297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.428315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.437343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.437361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.446512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.446531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.455569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.455587] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.464679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.464696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.473694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.473712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.482657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.482675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.491696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.491714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.500 [2024-07-12 14:20:41.500820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.500 [2024-07-12 14:20:41.500838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.510071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.510090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.519242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.519261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.528348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.528366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:49.759 [2024-07-12 14:20:41.542764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.542783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.551731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.551749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.561014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.561032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.569590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.569609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.578640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.578658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.587110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.587128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.596247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.596266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.605355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.605373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.614266] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.614284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.623232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.623250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.632272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.632291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.641371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.641395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.650313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.650332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.659388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.659406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.666317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.666335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.676963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.676982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.685514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.685533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.694609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.694628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.703773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.703792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.712313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.712332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.721387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.721422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.729983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.730002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.738443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.738461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.747688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.747706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.757093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 
[2024-07-12 14:20:41.757111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.759 [2024-07-12 14:20:41.765525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.759 [2024-07-12 14:20:41.765543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.775026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.775044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.783765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.783784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.793349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.793367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.801874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.801893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.811048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.811067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.819545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.819563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.828019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.828037] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.837293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.837313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.018 [2024-07-12 14:20:41.846568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.018 [2024-07-12 14:20:41.846587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.855561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.855579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.864521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.864543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.873636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.873655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.882066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.882085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.891211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.891229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.900353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.900371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:50.019 [2024-07-12 14:20:41.909502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.909520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.918620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.918639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.927774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.927792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.936784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.936801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.946885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.946904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.955421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.955439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.964193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.964211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.973299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.973316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.982374] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.982399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:41.991357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:41.991375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:42.000300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:42.000319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:42.009461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:42.009479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:42.018622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:42.018641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.019 [2024-07-12 14:20:42.027146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.019 [2024-07-12 14:20:42.027166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.278 [2024-07-12 14:20:42.036285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.278 [2024-07-12 14:20:42.036308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.278 [2024-07-12 14:20:42.044791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.278 [2024-07-12 14:20:42.044810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.278 [2024-07-12 14:20:42.054402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:50.278 [2024-07-12 14:20:42.054420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.278 [2024-07-12 14:20:42.063100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.063118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.072403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.072422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.081841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.081860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.090436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.090454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.099805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.099824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.108304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.108322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.116851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.116868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.125993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 
[2024-07-12 14:20:42.126012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.134452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.134471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:50.279
00:15:50.279 Latency(us)
00:15:50.279 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:15:50.279 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:50.279 Nvme1n1             :       5.01   16826.30     131.46       0.00       0.00    7599.23    3262.55   15500.69
00:15:50.279 ===================================================================================================================
00:15:50.279 Total               :              16826.30     131.46       0.00       0.00    7599.23    3262.55   15500.69
00:15:50.279 [2024-07-12 14:20:42.140875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.140892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.148895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.148909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.156912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.156923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.164940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.164955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.172962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.172982] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.180978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.180990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.189000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.189011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.197021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.197032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.205044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.205055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.213065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.213076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.221087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.221097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.229109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.229120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.237132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.237142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:50.279 [2024-07-12 14:20:42.245150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.245160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.253173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.253184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.261196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.261207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.269217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.269229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.277239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.277250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.279 [2024-07-12 14:20:42.285261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.279 [2024-07-12 14:20:42.285272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.538 [2024-07-12 14:20:42.293283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.538 [2024-07-12 14:20:42.293294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.538 [2024-07-12 14:20:42.301303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.538 [2024-07-12 14:20:42.301316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.538 [2024-07-12 14:20:42.309326] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.538 [2024-07-12 14:20:42.309337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.538 [2024-07-12 14:20:42.317348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.538 [2024-07-12 14:20:42.317358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2520223) - No such process 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2520223 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:50.538 delay0 00:15:50.538 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.539 14:20:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:50.539 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.539 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:50.539 14:20:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.539 14:20:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:50.539 EAL: No free 2048 kB hugepages reported on node 1
00:15:50.539 [2024-07-12 14:20:42.448024] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:57.104 Initializing NVMe Controllers
00:15:57.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:57.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:57.104 Initialization complete. Launching workers.
00:15:57.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 909
00:15:57.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1196, failed to submit 33
00:15:57.104 success 1027, unsuccess 169, failed 0
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:57.104 rmmod nvme_tcp
00:15:57.104 rmmod nvme_fabrics
00:15:57.104 rmmod nvme_keyring
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125
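The abort run summary above is internally consistent, and the same holds for the bdevperf latency table earlier in the log. A quick hedged sanity check on the reported figures (plain arithmetic copied from the output, no SPDK code involved):

```python
# Figures copied from the abort example output above.
abort_submitted = 1196
abort_success = 1027
abort_unsuccess = 169

# Every submitted abort completes either successfully or unsuccessfully.
assert abort_success + abort_unsuccess == abort_submitted

# From the earlier latency table: 16826.30 IOPS at an 8192-byte IO size
# should reproduce the reported 131.46 MiB/s (within rounding).
iops = 16826.30
io_size_bytes = 8192
mib_per_s = iops * io_size_bytes / (1024 * 1024)
assert abs(mib_per_s - 131.46) < 0.01
```

Checks like these are handy when a summary line in a flattened log may have been truncated or mis-joined by the console wrapper.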
-- # return 0 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2518365 ']' 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2518365 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2518365 ']' 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2518365 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518365 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518365' 00:15:57.104 killing process with pid 2518365 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2518365 00:15:57.104 14:20:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2518365 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.104 14:20:49 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.639 14:20:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.639 00:15:59.639 real 0m31.568s 00:15:59.639 user 0m43.516s 00:15:59.639 sys 0m10.366s 00:15:59.639 14:20:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.639 14:20:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:59.639 ************************************ 00:15:59.639 END TEST nvmf_zcopy 00:15:59.639 ************************************ 00:15:59.639 14:20:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:59.639 14:20:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:59.639 14:20:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:59.639 14:20:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.639 14:20:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.639 ************************************ 00:15:59.639 START TEST nvmf_nmic 00:15:59.639 ************************************ 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:59.639 * Looking for test storage... 
00:15:59.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.639 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.640 14:20:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.910 14:20:56 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.910 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.910 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:04.910 Found net devices under 0000:86:00.0: cvl_0_0 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.910 Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.910 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.911 14:20:56 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:04.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:04.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:16:04.911
00:16:04.911 --- 10.0.0.2 ping statistics ---
00:16:04.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:04.911 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:04.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:04.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms
00:16:04.911
00:16:04.911 --- 10.0.0.1 ping statistics ---
00:16:04.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:04.911 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2525623
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2525623
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2525623 ']'
00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- #
local rpc_addr=/var/tmp/spdk.sock 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.911 14:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:04.911 [2024-07-12 14:20:56.463735] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:16:04.911 [2024-07-12 14:20:56.463779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.911 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.911 [2024-07-12 14:20:56.521129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.911 [2024-07-12 14:20:56.602150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.911 [2024-07-12 14:20:56.602186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.911 [2024-07-12 14:20:56.602194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.911 [2024-07-12 14:20:56.602200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.911 [2024-07-12 14:20:56.602205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.911 [2024-07-12 14:20:56.602247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.911 [2024-07-12 14:20:56.602345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.911 [2024-07-12 14:20:56.602409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.911 [2024-07-12 14:20:56.602410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 [2024-07-12 14:20:57.320503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 Malloc0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 [2024-07-12 14:20:57.364431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:05.480 test case1: single bdev can't be used in multiple subsystems 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 [2024-07-12 14:20:57.388346] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:05.480 [2024-07-12 14:20:57.388367] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:05.480 [2024-07-12 14:20:57.388375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.480 request: 00:16:05.480 { 00:16:05.480 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:05.480 "namespace": { 00:16:05.480 "bdev_name": "Malloc0", 00:16:05.480 "no_auto_visible": false 00:16:05.480 }, 00:16:05.480 "method": "nvmf_subsystem_add_ns", 00:16:05.480 "req_id": 1 00:16:05.480 } 00:16:05.480 Got JSON-RPC error response 00:16:05.480 response: 00:16:05.480 { 00:16:05.480 "code": -32602, 00:16:05.480 "message": "Invalid parameters" 00:16:05.480 } 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:05.480 Adding namespace failed - expected result. 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:05.480 test case2: host connect to nvmf target in multiple paths 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:05.480 [2024-07-12 14:20:57.396489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.480 14:20:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.869 14:20:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:07.886 14:20:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:07.886 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.886 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.886 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.886 14:20:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:09.789 14:21:01 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:09.789 14:21:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:09.789 [global] 00:16:09.789 thread=1 00:16:09.789 invalidate=1 00:16:09.789 rw=write 00:16:09.789 time_based=1 00:16:09.789 runtime=1 00:16:09.789 ioengine=libaio 00:16:09.789 direct=1 00:16:09.789 bs=4096 00:16:09.789 iodepth=1 00:16:09.789 norandommap=0 00:16:09.789 numjobs=1 00:16:09.789 00:16:09.789 verify_dump=1 00:16:09.789 verify_backlog=512 00:16:09.789 verify_state_save=0 00:16:09.789 do_verify=1 00:16:09.789 verify=crc32c-intel 00:16:09.789 [job0] 00:16:09.789 filename=/dev/nvme0n1 00:16:09.789 Could not set queue depth (nvme0n1) 00:16:10.048 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.048 fio-3.35 00:16:10.048 Starting 1 thread 00:16:11.424 00:16:11.424 job0: (groupid=0, jobs=1): err= 0: pid=2526775: Fri Jul 12 14:21:03 2024 00:16:11.424 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:16:11.424 slat (nsec): min=10105, max=25280, avg=21775.27, stdev=2751.23 00:16:11.424 clat (usec): min=40897, max=41161, avg=40976.15, stdev=47.88 00:16:11.424 lat (usec): min=40920, max=41187, avg=40997.93, stdev=48.38 00:16:11.424 clat percentiles (usec): 00:16:11.424 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:11.424 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:16:11.424 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:11.424 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:11.424 | 99.99th=[41157] 00:16:11.424 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:16:11.424 slat (nsec): min=10447, max=46834, avg=11712.65, stdev=2736.74 00:16:11.424 clat (usec): min=136, max=283, avg=185.94, stdev=42.88 00:16:11.424 lat (usec): min=147, max=328, avg=197.65, stdev=43.21 00:16:11.424 clat percentiles (usec): 00:16:11.424 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:16:11.424 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 192], 00:16:11.424 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 241], 95.00th=[ 243], 00:16:11.424 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 285], 99.95th=[ 285], 00:16:11.424 | 99.99th=[ 285] 00:16:11.424 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:11.424 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:11.424 lat (usec) : 250=95.51%, 500=0.37% 00:16:11.424 lat (msec) : 50=4.12% 00:16:11.424 cpu : usr=0.50%, sys=0.90%, ctx=534, majf=0, minf=2 00:16:11.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.424 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.424 00:16:11.424 Run status group 0 (all jobs): 00:16:11.424 READ: bw=87.6KiB/s (89.7kB/s), 87.6KiB/s-87.6KiB/s (89.7kB/s-89.7kB/s), io=88.0KiB (90.1kB), run=1005-1005msec 00:16:11.424 WRITE: bw=2038KiB/s (2087kB/s), 2038KiB/s-2038KiB/s (2087kB/s-2087kB/s), io=2048KiB (2097kB), run=1005-1005msec 00:16:11.424 00:16:11.424 Disk stats (read/write): 00:16:11.424 nvme0n1: ios=69/512, 
merge=0/0, ticks=995/93, in_queue=1088, util=95.39% 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.424 rmmod nvme_tcp 00:16:11.424 rmmod nvme_fabrics 00:16:11.424 rmmod nvme_keyring 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:11.424 14:21:03 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2525623 ']' 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2525623 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2525623 ']' 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2525623 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2525623 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525623' 00:16:11.424 killing process with pid 2525623 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2525623 00:16:11.424 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2525623 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.683 14:21:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.216 14:21:05 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.216 00:16:14.216 real 0m14.480s 00:16:14.216 user 0m34.417s 00:16:14.216 sys 0m4.583s 00:16:14.216 14:21:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.216 14:21:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:14.216 ************************************ 00:16:14.216 END TEST nvmf_nmic 00:16:14.216 ************************************ 00:16:14.216 14:21:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.216 14:21:05 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:14.216 14:21:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.216 14:21:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.216 14:21:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.216 ************************************ 00:16:14.216 START TEST nvmf_fio_target 00:16:14.216 ************************************ 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:14.216 * Looking for test storage... 
00:16:14.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.216 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.217 14:21:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.487 
14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.487 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.488 
14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.488 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.488 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.488 14:21:10 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.488 14:21:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:16:19.488 00:16:19.488 --- 10.0.0.2 ping statistics --- 00:16:19.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.488 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:16:19.488 00:16:19.488 --- 10.0.0.1 ping statistics --- 00:16:19.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.488 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2530918 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2530918 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 2530918 ']' 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.488 14:21:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 [2024-07-12 14:21:11.248827] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:16:19.488 [2024-07-12 14:21:11.248867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.488 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.488 [2024-07-12 14:21:11.305606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.488 [2024-07-12 14:21:11.378785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.488 [2024-07-12 14:21:11.378824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.488 [2024-07-12 14:21:11.378831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.488 [2024-07-12 14:21:11.378836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.488 [2024-07-12 14:21:11.378841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:19.488 [2024-07-12 14:21:11.378884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.488 [2024-07-12 14:21:11.378982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.488 [2024-07-12 14:21:11.379043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.488 [2024-07-12 14:21:11.379045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.056 14:21:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.056 14:21:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:20.056 14:21:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.056 14:21:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.056 14:21:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.314 14:21:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.314 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:20.314 [2024-07-12 14:21:12.257795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.314 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.572 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:20.572 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.830 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:20.830 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:16:21.087 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:21.087 14:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.087 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:21.088 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:21.344 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.601 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:21.601 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.859 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:21.859 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.859 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:21.859 14:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:22.116 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:22.374 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:22.374 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.374 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:22.374 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.632 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.890 [2024-07-12 14:21:14.703649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.890 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:23.148 14:21:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:23.148 14:21:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.525 14:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:24.525 14:21:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.525 14:21:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.525 14:21:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:24.525 14:21:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:24.525 14:21:16 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:26.426 14:21:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:26.426 [global] 00:16:26.426 thread=1 00:16:26.426 invalidate=1 00:16:26.426 rw=write 00:16:26.426 time_based=1 00:16:26.426 runtime=1 00:16:26.426 ioengine=libaio 00:16:26.426 direct=1 00:16:26.426 bs=4096 00:16:26.426 iodepth=1 00:16:26.426 norandommap=0 00:16:26.426 numjobs=1 00:16:26.426 00:16:26.426 verify_dump=1 00:16:26.426 verify_backlog=512 00:16:26.426 verify_state_save=0 00:16:26.426 do_verify=1 00:16:26.426 verify=crc32c-intel 00:16:26.426 [job0] 00:16:26.426 filename=/dev/nvme0n1 00:16:26.426 [job1] 00:16:26.426 filename=/dev/nvme0n2 00:16:26.426 [job2] 00:16:26.426 filename=/dev/nvme0n3 00:16:26.426 [job3] 00:16:26.426 filename=/dev/nvme0n4 00:16:26.426 Could not set queue depth (nvme0n1) 00:16:26.426 Could not set queue depth (nvme0n2) 00:16:26.426 Could not set queue depth (nvme0n3) 00:16:26.426 Could not set queue depth (nvme0n4) 00:16:26.684 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.684 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:26.684 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.684 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.684 fio-3.35 00:16:26.684 Starting 4 threads 00:16:28.061 00:16:28.061 job0: (groupid=0, jobs=1): err= 0: pid=2532272: Fri Jul 12 14:21:19 2024 00:16:28.061 read: IOPS=2062, BW=8252KiB/s (8450kB/s)(8260KiB/1001msec) 00:16:28.061 slat (nsec): min=6310, max=27333, avg=7172.91, stdev=1044.15 00:16:28.061 clat (usec): min=203, max=452, avg=244.51, stdev=13.04 00:16:28.061 lat (usec): min=210, max=460, avg=251.68, stdev=13.00 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:16:28.061 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:16:28.061 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:16:28.061 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 306], 00:16:28.061 | 99.99th=[ 453] 00:16:28.061 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:28.061 slat (nsec): min=8205, max=47323, avg=10493.76, stdev=1422.34 00:16:28.061 clat (usec): min=129, max=426, avg=173.02, stdev=24.54 00:16:28.061 lat (usec): min=139, max=437, avg=183.51, stdev=24.72 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:16:28.061 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:16:28.061 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 221], 00:16:28.061 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 367], 99.95th=[ 375], 00:16:28.061 | 99.99th=[ 429] 00:16:28.061 bw ( KiB/s): min=10035, max=10035, per=42.80%, avg=10035.00, stdev= 0.00, samples=1 00:16:28.061 iops : min= 2508, max= 2508, avg=2508.00, stdev= 0.00, samples=1 00:16:28.061 lat (usec) : 250=83.87%, 500=16.13% 00:16:28.061 cpu : usr=2.70%, 
sys=4.00%, ctx=4627, majf=0, minf=1 00:16:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 issued rwts: total=2065,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.061 job1: (groupid=0, jobs=1): err= 0: pid=2532273: Fri Jul 12 14:21:19 2024 00:16:28.061 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:28.061 slat (nsec): min=6474, max=15718, avg=7361.19, stdev=513.52 00:16:28.061 clat (usec): min=215, max=418, avg=258.50, stdev=21.50 00:16:28.061 lat (usec): min=222, max=424, avg=265.86, stdev=21.55 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:16:28.061 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:16:28.061 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 306], 00:16:28.061 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 355], 99.95th=[ 355], 00:16:28.061 | 99.99th=[ 420] 00:16:28.061 write: IOPS=2468, BW=9874KiB/s (10.1MB/s)(9884KiB/1001msec); 0 zone resets 00:16:28.061 slat (usec): min=9, max=1272, avg=11.71, stdev=31.46 00:16:28.061 clat (usec): min=121, max=900, avg=168.21, stdev=35.38 00:16:28.061 lat (usec): min=132, max=1440, avg=179.92, stdev=47.62 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:16:28.061 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:16:28.061 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 206], 95.00th=[ 233], 00:16:28.061 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 627], 99.95th=[ 660], 00:16:28.061 | 99.99th=[ 898] 00:16:28.061 bw ( KiB/s): min= 9328, max= 9328, per=39.78%, avg=9328.00, stdev= 0.00, samples=1 00:16:28.061 iops : min= 2332, max= 2332, 
avg=2332.00, stdev= 0.00, samples=1 00:16:28.061 lat (usec) : 250=70.92%, 500=28.99%, 750=0.07%, 1000=0.02% 00:16:28.061 cpu : usr=2.60%, sys=4.10%, ctx=4522, majf=0, minf=2 00:16:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 issued rwts: total=2048,2471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.061 job2: (groupid=0, jobs=1): err= 0: pid=2532274: Fri Jul 12 14:21:19 2024 00:16:28.061 read: IOPS=40, BW=163KiB/s (167kB/s)(168KiB/1033msec) 00:16:28.061 slat (nsec): min=7099, max=24034, avg=15192.81, stdev=7559.25 00:16:28.061 clat (usec): min=357, max=42051, avg=21730.08, stdev=20596.75 00:16:28.061 lat (usec): min=364, max=42074, avg=21745.27, stdev=20602.65 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 371], 00:16:28.061 | 30.00th=[ 375], 40.00th=[ 392], 50.00th=[40109], 60.00th=[41157], 00:16:28.061 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:16:28.061 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:28.061 | 99.99th=[42206] 00:16:28.061 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:28.061 slat (nsec): min=9593, max=68702, avg=11075.03, stdev=3276.73 00:16:28.061 clat (usec): min=139, max=946, avg=219.72, stdev=65.77 00:16:28.061 lat (usec): min=149, max=958, avg=230.79, stdev=66.18 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 182], 00:16:28.061 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 229], 00:16:28.061 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 273], 00:16:28.061 | 99.00th=[ 388], 99.50th=[ 644], 99.90th=[ 947], 99.95th=[ 947], 
00:16:28.061 | 99.99th=[ 947] 00:16:28.061 bw ( KiB/s): min= 4096, max= 4096, per=17.47%, avg=4096.00, stdev= 0.00, samples=1 00:16:28.061 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:28.061 lat (usec) : 250=82.31%, 500=12.64%, 750=0.72%, 1000=0.36% 00:16:28.061 lat (msec) : 50=3.97% 00:16:28.061 cpu : usr=0.10%, sys=0.68%, ctx=554, majf=0, minf=1 00:16:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 issued rwts: total=42,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.061 job3: (groupid=0, jobs=1): err= 0: pid=2532276: Fri Jul 12 14:21:19 2024 00:16:28.061 read: IOPS=29, BW=118KiB/s (120kB/s)(120KiB/1020msec) 00:16:28.061 slat (nsec): min=7042, max=24662, avg=18900.23, stdev=6949.41 00:16:28.061 clat (usec): min=219, max=42071, avg=30243.85, stdev=18359.30 00:16:28.061 lat (usec): min=242, max=42094, avg=30262.75, stdev=18360.95 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 221], 5.00th=[ 247], 10.00th=[ 285], 20.00th=[ 367], 00:16:28.061 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:16:28.061 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:28.061 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:28.061 | 99.99th=[42206] 00:16:28.061 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:16:28.061 slat (nsec): min=4645, max=38439, avg=10901.97, stdev=2446.58 00:16:28.061 clat (usec): min=140, max=423, avg=202.97, stdev=38.19 00:16:28.061 lat (usec): min=144, max=456, avg=213.87, stdev=38.66 00:16:28.061 clat percentiles (usec): 00:16:28.061 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:16:28.061 | 30.00th=[ 178], 
40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 200], 00:16:28.061 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 258], 00:16:28.061 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 424], 99.95th=[ 424], 00:16:28.061 | 99.99th=[ 424] 00:16:28.061 bw ( KiB/s): min= 4087, max= 4087, per=17.43%, avg=4087.00, stdev= 0.00, samples=1 00:16:28.061 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:28.061 lat (usec) : 250=87.08%, 500=8.86% 00:16:28.061 lat (msec) : 50=4.06% 00:16:28.061 cpu : usr=0.29%, sys=0.49%, ctx=543, majf=0, minf=1 00:16:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.061 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.061 00:16:28.061 Run status group 0 (all jobs): 00:16:28.061 READ: bw=15.8MiB/s (16.6MB/s), 118KiB/s-8252KiB/s (120kB/s-8450kB/s), io=16.3MiB (17.1MB), run=1001-1033msec 00:16:28.061 WRITE: bw=22.9MiB/s (24.0MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=23.7MiB (24.8MB), run=1001-1033msec 00:16:28.061 00:16:28.061 Disk stats (read/write): 00:16:28.061 nvme0n1: ios=1911/2048, merge=0/0, ticks=759/348, in_queue=1107, util=98.00% 00:16:28.061 nvme0n2: ios=1828/2048, merge=0/0, ticks=638/334, in_queue=972, util=98.48% 00:16:28.061 nvme0n3: ios=37/512, merge=0/0, ticks=709/111, in_queue=820, util=88.96% 00:16:28.061 nvme0n4: ios=83/512, merge=0/0, ticks=1159/104, in_queue=1263, util=98.43% 00:16:28.061 14:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:28.061 [global] 00:16:28.061 thread=1 00:16:28.061 invalidate=1 00:16:28.061 rw=randwrite 00:16:28.061 time_based=1 00:16:28.061 runtime=1 
00:16:28.061 ioengine=libaio 00:16:28.061 direct=1 00:16:28.061 bs=4096 00:16:28.061 iodepth=1 00:16:28.061 norandommap=0 00:16:28.061 numjobs=1 00:16:28.061 00:16:28.061 verify_dump=1 00:16:28.062 verify_backlog=512 00:16:28.062 verify_state_save=0 00:16:28.062 do_verify=1 00:16:28.062 verify=crc32c-intel 00:16:28.062 [job0] 00:16:28.062 filename=/dev/nvme0n1 00:16:28.062 [job1] 00:16:28.062 filename=/dev/nvme0n2 00:16:28.062 [job2] 00:16:28.062 filename=/dev/nvme0n3 00:16:28.062 [job3] 00:16:28.062 filename=/dev/nvme0n4 00:16:28.062 Could not set queue depth (nvme0n1) 00:16:28.062 Could not set queue depth (nvme0n2) 00:16:28.062 Could not set queue depth (nvme0n3) 00:16:28.062 Could not set queue depth (nvme0n4) 00:16:28.320 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:28.320 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:28.320 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:28.320 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:28.320 fio-3.35 00:16:28.320 Starting 4 threads 00:16:29.715 00:16:29.715 job0: (groupid=0, jobs=1): err= 0: pid=2532657: Fri Jul 12 14:21:21 2024 00:16:29.715 read: IOPS=2050, BW=8204KiB/s (8401kB/s)(8212KiB/1001msec) 00:16:29.715 slat (nsec): min=7382, max=39741, avg=8177.75, stdev=1156.26 00:16:29.715 clat (usec): min=185, max=508, avg=242.02, stdev=16.61 00:16:29.715 lat (usec): min=195, max=516, avg=250.20, stdev=16.55 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:16:29.715 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:16:29.715 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:16:29.715 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 392], 99.95th=[ 445], 
00:16:29.715 | 99.99th=[ 510] 00:16:29.715 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:29.715 slat (nsec): min=10319, max=35647, avg=11728.18, stdev=1536.03 00:16:29.715 clat (usec): min=132, max=265, avg=172.61, stdev=19.43 00:16:29.715 lat (usec): min=143, max=300, avg=184.34, stdev=19.65 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:16:29.715 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:16:29.715 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 212], 00:16:29.715 | 99.00th=[ 233], 99.50th=[ 237], 99.90th=[ 251], 99.95th=[ 265], 00:16:29.715 | 99.99th=[ 265] 00:16:29.715 bw ( KiB/s): min=10136, max=10136, per=42.18%, avg=10136.00, stdev= 0.00, samples=1 00:16:29.715 iops : min= 2534, max= 2534, avg=2534.00, stdev= 0.00, samples=1 00:16:29.715 lat (usec) : 250=86.47%, 500=13.51%, 750=0.02% 00:16:29.715 cpu : usr=4.00%, sys=7.40%, ctx=4614, majf=0, minf=2 00:16:29.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 issued rwts: total=2053,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.715 job1: (groupid=0, jobs=1): err= 0: pid=2532664: Fri Jul 12 14:21:21 2024 00:16:29.715 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:29.715 slat (nsec): min=6313, max=24189, avg=7070.61, stdev=1125.61 00:16:29.715 clat (usec): min=215, max=1290, avg=254.95, stdev=39.27 00:16:29.715 lat (usec): min=222, max=1297, avg=262.02, stdev=39.43 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:16:29.715 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:16:29.715 | 
70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:16:29.715 | 99.00th=[ 400], 99.50th=[ 445], 99.90th=[ 660], 99.95th=[ 1004], 00:16:29.715 | 99.99th=[ 1287] 00:16:29.715 write: IOPS=2492, BW=9970KiB/s (10.2MB/s)(9980KiB/1001msec); 0 zone resets 00:16:29.715 slat (nsec): min=8937, max=40534, avg=10510.98, stdev=3111.76 00:16:29.715 clat (usec): min=130, max=1310, avg=170.74, stdev=33.72 00:16:29.715 lat (usec): min=140, max=1321, avg=181.25, stdev=34.04 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:16:29.715 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 172], 00:16:29.715 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:16:29.715 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 367], 00:16:29.715 | 99.99th=[ 1303] 00:16:29.715 bw ( KiB/s): min= 9872, max= 9872, per=41.09%, avg=9872.00, stdev= 0.00, samples=1 00:16:29.715 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:16:29.715 lat (usec) : 250=75.70%, 500=24.13%, 750=0.11% 00:16:29.715 lat (msec) : 2=0.07% 00:16:29.715 cpu : usr=2.60%, sys=3.90%, ctx=4546, majf=0, minf=1 00:16:29.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 issued rwts: total=2048,2495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.715 job2: (groupid=0, jobs=1): err= 0: pid=2532683: Fri Jul 12 14:21:21 2024 00:16:29.715 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:16:29.715 slat (nsec): min=9882, max=23537, avg=21985.91, stdev=2727.01 00:16:29.715 clat (usec): min=37988, max=41295, avg=40848.43, stdev=644.46 00:16:29.715 lat (usec): min=38011, max=41305, avg=40870.42, stdev=644.06 00:16:29.715 clat 
percentiles (usec): 00:16:29.715 | 1.00th=[38011], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:29.715 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:29.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:29.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:29.715 | 99.99th=[41157] 00:16:29.715 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:16:29.715 slat (nsec): min=4397, max=63672, avg=10026.62, stdev=3480.12 00:16:29.715 clat (usec): min=148, max=1315, avg=191.42, stdev=55.19 00:16:29.715 lat (usec): min=157, max=1325, avg=201.45, stdev=55.81 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 172], 00:16:29.715 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:16:29.715 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 225], 95.00th=[ 237], 00:16:29.715 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 1319], 99.95th=[ 1319], 00:16:29.715 | 99.99th=[ 1319] 00:16:29.715 bw ( KiB/s): min= 4096, max= 4096, per=17.05%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.715 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.715 lat (usec) : 250=94.01%, 500=1.69% 00:16:29.715 lat (msec) : 2=0.19%, 50=4.12% 00:16:29.715 cpu : usr=0.20%, sys=0.50%, ctx=535, majf=0, minf=1 00:16:29.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.715 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.715 job3: (groupid=0, jobs=1): err= 0: pid=2532689: Fri Jul 12 14:21:21 2024 00:16:29.715 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:16:29.715 slat (nsec): min=8636, 
max=13757, avg=9770.87, stdev=984.75 00:16:29.715 clat (usec): min=281, max=41984, avg=39275.75, stdev=8506.22 00:16:29.715 lat (usec): min=291, max=41995, avg=39285.52, stdev=8506.21 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:29.715 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:29.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:29.715 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:29.715 | 99.99th=[42206] 00:16:29.715 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:16:29.715 slat (nsec): min=10826, max=35001, avg=12587.58, stdev=1897.87 00:16:29.715 clat (usec): min=143, max=314, avg=193.08, stdev=26.17 00:16:29.715 lat (usec): min=155, max=332, avg=205.66, stdev=26.43 00:16:29.715 clat percentiles (usec): 00:16:29.715 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:16:29.715 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:16:29.715 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 233], 95.00th=[ 241], 00:16:29.716 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 314], 99.95th=[ 314], 00:16:29.716 | 99.99th=[ 314] 00:16:29.716 bw ( KiB/s): min= 4096, max= 4096, per=17.05%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.716 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.716 lat (usec) : 250=92.90%, 500=2.99% 00:16:29.716 lat (msec) : 50=4.11% 00:16:29.716 cpu : usr=0.30%, sys=1.09%, ctx=537, majf=0, minf=1 00:16:29.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.716 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.716 
00:16:29.716 Run status group 0 (all jobs): 00:16:29.716 READ: bw=16.0MiB/s (16.8MB/s), 87.6KiB/s-8204KiB/s (89.8kB/s-8401kB/s), io=16.2MiB (17.0MB), run=1001-1012msec 00:16:29.716 WRITE: bw=23.5MiB/s (24.6MB/s), 2024KiB/s-9.99MiB/s (2072kB/s-10.5MB/s), io=23.7MiB (24.9MB), run=1001-1012msec 00:16:29.716 00:16:29.716 Disk stats (read/write): 00:16:29.716 nvme0n1: ios=1889/2048, merge=0/0, ticks=616/333, in_queue=949, util=97.39% 00:16:29.716 nvme0n2: ios=1823/2048, merge=0/0, ticks=1387/330, in_queue=1717, util=95.13% 00:16:29.716 nvme0n3: ios=75/512, merge=0/0, ticks=816/99, in_queue=915, util=90.74% 00:16:29.716 nvme0n4: ios=42/512, merge=0/0, ticks=1684/99, in_queue=1783, util=97.27% 00:16:29.716 14:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:29.716 [global] 00:16:29.716 thread=1 00:16:29.716 invalidate=1 00:16:29.716 rw=write 00:16:29.716 time_based=1 00:16:29.716 runtime=1 00:16:29.716 ioengine=libaio 00:16:29.716 direct=1 00:16:29.716 bs=4096 00:16:29.716 iodepth=128 00:16:29.716 norandommap=0 00:16:29.716 numjobs=1 00:16:29.716 00:16:29.716 verify_dump=1 00:16:29.716 verify_backlog=512 00:16:29.716 verify_state_save=0 00:16:29.716 do_verify=1 00:16:29.716 verify=crc32c-intel 00:16:29.716 [job0] 00:16:29.716 filename=/dev/nvme0n1 00:16:29.716 [job1] 00:16:29.716 filename=/dev/nvme0n2 00:16:29.716 [job2] 00:16:29.716 filename=/dev/nvme0n3 00:16:29.716 [job3] 00:16:29.716 filename=/dev/nvme0n4 00:16:29.716 Could not set queue depth (nvme0n1) 00:16:29.716 Could not set queue depth (nvme0n2) 00:16:29.716 Could not set queue depth (nvme0n3) 00:16:29.716 Could not set queue depth (nvme0n4) 00:16:29.979 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.979 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.979 
job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.979 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.979 fio-3.35 00:16:29.979 Starting 4 threads 00:16:31.355 00:16:31.355 job0: (groupid=0, jobs=1): err= 0: pid=2533100: Fri Jul 12 14:21:23 2024 00:16:31.355 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:16:31.355 slat (nsec): min=1238, max=12728k, avg=111535.62, stdev=769932.64 00:16:31.355 clat (usec): min=4193, max=37510, avg=12794.47, stdev=4505.35 00:16:31.355 lat (usec): min=4198, max=37513, avg=12906.01, stdev=4564.81 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 5932], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[10159], 00:16:31.355 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:16:31.355 | 70.00th=[12911], 80.00th=[14091], 90.00th=[17433], 95.00th=[22676], 00:16:31.355 | 99.00th=[31851], 99.50th=[33424], 99.90th=[37487], 99.95th=[37487], 00:16:31.355 | 99.99th=[37487] 00:16:31.355 write: IOPS=4231, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1011msec); 0 zone resets 00:16:31.355 slat (usec): min=2, max=8288, avg=121.90, stdev=531.95 00:16:31.355 clat (usec): min=1569, max=37511, avg=17728.07, stdev=8402.03 00:16:31.355 lat (usec): min=1601, max=37521, avg=17849.97, stdev=8460.86 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 3195], 5.00th=[ 6194], 10.00th=[ 7963], 20.00th=[ 9765], 00:16:31.355 | 30.00th=[10290], 40.00th=[14615], 50.00th=[18482], 60.00th=[20317], 00:16:31.355 | 70.00th=[22938], 80.00th=[24511], 90.00th=[29230], 95.00th=[32900], 00:16:31.355 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:16:31.355 | 99.99th=[37487] 00:16:31.355 bw ( KiB/s): min=13704, max=19504, per=23.80%, avg=16604.00, stdev=4101.22, samples=2 00:16:31.355 iops : min= 3426, max= 4876, avg=4151.00, stdev=1025.30, samples=2 00:16:31.355 lat (msec) : 2=0.10%, 
4=0.88%, 10=19.21%, 20=54.38%, 50=25.42% 00:16:31.355 cpu : usr=2.97%, sys=4.55%, ctx=543, majf=0, minf=1 00:16:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.355 issued rwts: total=4096,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.355 job1: (groupid=0, jobs=1): err= 0: pid=2533116: Fri Jul 12 14:21:23 2024 00:16:31.355 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:16:31.355 slat (nsec): min=1077, max=22454k, avg=163909.93, stdev=1072600.47 00:16:31.355 clat (usec): min=10748, max=56244, avg=21069.99, stdev=9362.87 00:16:31.355 lat (usec): min=10753, max=56268, avg=21233.90, stdev=9423.49 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[10945], 5.00th=[14091], 10.00th=[15008], 20.00th=[15926], 00:16:31.355 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[17695], 00:16:31.355 | 70.00th=[18482], 80.00th=[24249], 90.00th=[34341], 95.00th=[49021], 00:16:31.355 | 99.00th=[51119], 99.50th=[51119], 99.90th=[55313], 99.95th=[55837], 00:16:31.355 | 99.99th=[56361] 00:16:31.355 write: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1007msec); 0 zone resets 00:16:31.355 slat (nsec): min=1896, max=53197k, avg=209811.13, stdev=1627814.32 00:16:31.355 clat (usec): min=4710, max=66807, avg=22654.25, stdev=12869.39 00:16:31.355 lat (msec): min=6, max=107, avg=22.86, stdev=13.03 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 7701], 5.00th=[12256], 10.00th=[13304], 20.00th=[13698], 00:16:31.355 | 30.00th=[14091], 40.00th=[15533], 50.00th=[18220], 60.00th=[19530], 00:16:31.355 | 70.00th=[22152], 80.00th=[29230], 90.00th=[45351], 95.00th=[53216], 00:16:31.355 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:16:31.355 | 
99.99th=[66847] 00:16:31.355 bw ( KiB/s): min=10000, max=10600, per=14.76%, avg=10300.00, stdev=424.26, samples=2 00:16:31.355 iops : min= 2500, max= 2650, avg=2575.00, stdev=106.07, samples=2 00:16:31.355 lat (msec) : 10=0.84%, 20=66.74%, 50=26.76%, 100=5.67% 00:16:31.355 cpu : usr=1.79%, sys=2.58%, ctx=271, majf=0, minf=1 00:16:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.355 issued rwts: total=2560,2698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.355 job2: (groupid=0, jobs=1): err= 0: pid=2533137: Fri Jul 12 14:21:23 2024 00:16:31.355 read: IOPS=4975, BW=19.4MiB/s (20.4MB/s)(20.2MiB/1042msec) 00:16:31.355 slat (nsec): min=1271, max=7075.2k, avg=91136.28, stdev=524715.20 00:16:31.355 clat (usec): min=4186, max=50850, avg=11825.03, stdev=4332.66 00:16:31.355 lat (usec): min=4191, max=50854, avg=11916.16, stdev=4356.55 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10814], 00:16:31.355 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:16:31.355 | 70.00th=[11600], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:16:31.355 | 99.00th=[46924], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:16:31.355 | 99.99th=[50594] 00:16:31.355 write: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1042msec); 0 zone resets 00:16:31.355 slat (usec): min=2, max=20984, avg=88.49, stdev=532.84 00:16:31.355 clat (usec): min=1021, max=56583, avg=12591.34, stdev=6406.80 00:16:31.355 lat (usec): min=1171, max=56586, avg=12679.83, stdev=6426.91 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 6325], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10683], 00:16:31.355 | 30.00th=[10814], 40.00th=[11076], 
50.00th=[11338], 60.00th=[11469], 00:16:31.355 | 70.00th=[11600], 80.00th=[11863], 90.00th=[14615], 95.00th=[22676], 00:16:31.355 | 99.00th=[51643], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:16:31.355 | 99.99th=[56361] 00:16:31.355 bw ( KiB/s): min=20920, max=23632, per=31.93%, avg=22276.00, stdev=1917.67, samples=2 00:16:31.355 iops : min= 5230, max= 5908, avg=5569.00, stdev=479.42, samples=2 00:16:31.355 lat (msec) : 2=0.03%, 4=0.20%, 10=10.79%, 20=85.16%, 50=3.11% 00:16:31.355 lat (msec) : 100=0.71% 00:16:31.355 cpu : usr=3.07%, sys=5.67%, ctx=656, majf=0, minf=1 00:16:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.355 issued rwts: total=5184,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.355 job3: (groupid=0, jobs=1): err= 0: pid=2533140: Fri Jul 12 14:21:23 2024 00:16:31.355 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:16:31.355 slat (nsec): min=1168, max=10300k, avg=99699.21, stdev=725789.99 00:16:31.355 clat (usec): min=3404, max=25756, avg=12412.48, stdev=3342.22 00:16:31.355 lat (usec): min=3406, max=25760, avg=12512.18, stdev=3389.73 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 4752], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10814], 00:16:31.355 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:16:31.355 | 70.00th=[12649], 80.00th=[14222], 90.00th=[17171], 95.00th=[19268], 00:16:31.355 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822], 00:16:31.355 | 99.99th=[25822] 00:16:31.355 write: IOPS=5510, BW=21.5MiB/s (22.6MB/s)(21.7MiB/1010msec); 0 zone resets 00:16:31.355 slat (usec): min=2, max=10309, avg=79.45, stdev=487.16 00:16:31.355 clat (usec): min=1766, max=27976, avg=11511.08, 
stdev=4284.75 00:16:31.355 lat (usec): min=1777, max=27979, avg=11590.53, stdev=4315.26 00:16:31.355 clat percentiles (usec): 00:16:31.355 | 1.00th=[ 2966], 5.00th=[ 4555], 10.00th=[ 6783], 20.00th=[ 8717], 00:16:31.355 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11600], 00:16:31.355 | 70.00th=[11863], 80.00th=[12780], 90.00th=[18220], 95.00th=[20579], 00:16:31.355 | 99.00th=[27132], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:16:31.355 | 99.99th=[27919] 00:16:31.355 bw ( KiB/s): min=21120, max=22392, per=31.18%, avg=21756.00, stdev=899.44, samples=2 00:16:31.355 iops : min= 5280, max= 5598, avg=5439.00, stdev=224.86, samples=2 00:16:31.355 lat (msec) : 2=0.15%, 4=1.40%, 10=20.51%, 20=73.25%, 50=4.68% 00:16:31.355 cpu : usr=4.26%, sys=4.16%, ctx=546, majf=0, minf=1 00:16:31.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:31.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.355 issued rwts: total=5120,5566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.355 00:16:31.355 Run status group 0 (all jobs): 00:16:31.355 READ: bw=63.6MiB/s (66.7MB/s), 9.93MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=66.2MiB (69.5MB), run=1007-1042msec 00:16:31.355 WRITE: bw=68.1MiB/s (71.4MB/s), 10.5MiB/s-21.5MiB/s (11.0MB/s-22.6MB/s), io=71.0MiB (74.4MB), run=1007-1042msec 00:16:31.355 00:16:31.355 Disk stats (read/write): 00:16:31.356 nvme0n1: ios=3634/3639, merge=0/0, ticks=43849/60388, in_queue=104237, util=86.97% 00:16:31.356 nvme0n2: ios=2071/2287, merge=0/0, ticks=14486/16390, in_queue=30876, util=95.43% 00:16:31.356 nvme0n3: ios=4569/4608, merge=0/0, ticks=28512/30405, in_queue=58917, util=94.69% 00:16:31.356 nvme0n4: ios=4357/4608, merge=0/0, ticks=51394/51744, in_queue=103138, util=96.44% 00:16:31.356 14:21:23 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:31.356 [global] 00:16:31.356 thread=1 00:16:31.356 invalidate=1 00:16:31.356 rw=randwrite 00:16:31.356 time_based=1 00:16:31.356 runtime=1 00:16:31.356 ioengine=libaio 00:16:31.356 direct=1 00:16:31.356 bs=4096 00:16:31.356 iodepth=128 00:16:31.356 norandommap=0 00:16:31.356 numjobs=1 00:16:31.356 00:16:31.356 verify_dump=1 00:16:31.356 verify_backlog=512 00:16:31.356 verify_state_save=0 00:16:31.356 do_verify=1 00:16:31.356 verify=crc32c-intel 00:16:31.356 [job0] 00:16:31.356 filename=/dev/nvme0n1 00:16:31.356 [job1] 00:16:31.356 filename=/dev/nvme0n2 00:16:31.356 [job2] 00:16:31.356 filename=/dev/nvme0n3 00:16:31.356 [job3] 00:16:31.356 filename=/dev/nvme0n4 00:16:31.356 Could not set queue depth (nvme0n1) 00:16:31.356 Could not set queue depth (nvme0n2) 00:16:31.356 Could not set queue depth (nvme0n3) 00:16:31.356 Could not set queue depth (nvme0n4) 00:16:31.613 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:31.613 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:31.613 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:31.613 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:31.613 fio-3.35 00:16:31.613 Starting 4 threads 00:16:32.991 00:16:32.991 job0: (groupid=0, jobs=1): err= 0: pid=2533559: Fri Jul 12 14:21:24 2024 00:16:32.991 read: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1009msec) 00:16:32.991 slat (nsec): min=1298, max=12609k, avg=112100.48, stdev=892084.76 00:16:32.991 clat (usec): min=3021, max=35806, avg=14146.12, stdev=5501.47 00:16:32.991 lat (usec): min=3025, max=35832, avg=14258.22, stdev=5576.69 00:16:32.991 clat percentiles (usec): 
00:16:32.991 | 1.00th=[ 5145], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8455], 00:16:32.991 | 30.00th=[ 9372], 40.00th=[12387], 50.00th=[14091], 60.00th=[15795], 00:16:32.991 | 70.00th=[16909], 80.00th=[17695], 90.00th=[22676], 95.00th=[24511], 00:16:32.991 | 99.00th=[27657], 99.50th=[33424], 99.90th=[34341], 99.95th=[35390], 00:16:32.991 | 99.99th=[35914] 00:16:32.991 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:16:32.991 slat (usec): min=2, max=40245, avg=110.25, stdev=1030.77 00:16:32.991 clat (usec): min=1888, max=57629, avg=14874.64, stdev=9354.49 00:16:32.991 lat (usec): min=1896, max=57642, avg=14984.88, stdev=9429.16 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 3163], 5.00th=[ 5800], 10.00th=[ 7308], 20.00th=[ 8094], 00:16:32.991 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[13173], 60.00th=[16057], 00:16:32.991 | 70.00th=[17433], 80.00th=[19006], 90.00th=[24773], 95.00th=[35390], 00:16:32.991 | 99.00th=[51643], 99.50th=[54789], 99.90th=[56886], 99.95th=[57410], 00:16:32.991 | 99.99th=[57410] 00:16:32.991 bw ( KiB/s): min=14128, max=22440, per=27.99%, avg=18284.00, stdev=5877.47, samples=2 00:16:32.991 iops : min= 3532, max= 5610, avg=4571.00, stdev=1469.37, samples=2 00:16:32.991 lat (msec) : 2=0.07%, 4=1.23%, 10=35.11%, 20=48.29%, 50=14.61% 00:16:32.991 lat (msec) : 100=0.69% 00:16:32.991 cpu : usr=3.27%, sys=4.86%, ctx=422, majf=0, minf=1 00:16:32.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:32.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.991 issued rwts: total=4187,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.991 job1: (groupid=0, jobs=1): err= 0: pid=2533575: Fri Jul 12 14:21:24 2024 00:16:32.991 read: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 
00:16:32.991 slat (nsec): min=1311, max=74805k, avg=109709.14, stdev=1476153.59 00:16:32.991 clat (usec): min=728, max=135973, avg=16317.36, stdev=18746.28 00:16:32.991 lat (usec): min=1909, max=135981, avg=16427.07, stdev=18870.30 00:16:32.991 clat percentiles (msec): 00:16:32.991 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:16:32.991 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:16:32.991 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 31], 95.00th=[ 62], 00:16:32.991 | 99.00th=[ 104], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 136], 00:16:32.991 | 99.99th=[ 136] 00:16:32.991 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:32.991 slat (usec): min=2, max=20648, avg=128.58, stdev=940.34 00:16:32.991 clat (usec): min=868, max=79246, avg=16164.34, stdev=13991.84 00:16:32.991 lat (usec): min=875, max=79261, avg=16292.92, stdev=14091.58 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 3359], 5.00th=[ 5538], 10.00th=[ 7439], 20.00th=[ 8094], 00:16:32.991 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:16:32.991 | 70.00th=[16909], 80.00th=[19792], 90.00th=[35390], 95.00th=[46924], 00:16:32.991 | 99.00th=[71828], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:16:32.991 | 99.99th=[79168] 00:16:32.991 bw ( KiB/s): min=13264, max=19504, per=25.08%, avg=16384.00, stdev=4412.35, samples=2 00:16:32.991 iops : min= 3316, max= 4876, avg=4096.00, stdev=1103.09, samples=2 00:16:32.991 lat (usec) : 750=0.01%, 1000=0.14% 00:16:32.991 lat (msec) : 2=0.18%, 4=1.04%, 10=51.85%, 20=30.10%, 50=11.46% 00:16:32.991 lat (msec) : 100=4.38%, 250=0.84% 00:16:32.991 cpu : usr=2.69%, sys=4.89%, ctx=336, majf=0, minf=1 00:16:32.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:32.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.991 issued 
rwts: total=3721,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.991 job2: (groupid=0, jobs=1): err= 0: pid=2533595: Fri Jul 12 14:21:24 2024 00:16:32.991 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:16:32.991 slat (nsec): min=1156, max=21083k, avg=129921.32, stdev=1066276.16 00:16:32.991 clat (usec): min=3565, max=51677, avg=17198.80, stdev=7536.52 00:16:32.991 lat (usec): min=3571, max=51686, avg=17328.73, stdev=7625.28 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 4621], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9896], 00:16:32.991 | 30.00th=[11994], 40.00th=[15139], 50.00th=[16909], 60.00th=[17695], 00:16:32.991 | 70.00th=[19792], 80.00th=[21103], 90.00th=[27657], 95.00th=[28181], 00:16:32.991 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:16:32.991 | 99.99th=[51643] 00:16:32.991 write: IOPS=3718, BW=14.5MiB/s (15.2MB/s)(14.7MiB/1015msec); 0 zone resets 00:16:32.991 slat (usec): min=2, max=18531, avg=110.34, stdev=860.31 00:16:32.991 clat (usec): min=2859, max=84339, avg=17805.03, stdev=14690.20 00:16:32.991 lat (usec): min=2866, max=84348, avg=17915.37, stdev=14753.37 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 3523], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 9634], 00:16:32.991 | 30.00th=[11338], 40.00th=[12518], 50.00th=[14615], 60.00th=[17171], 00:16:32.991 | 70.00th=[17957], 80.00th=[19792], 90.00th=[22676], 95.00th=[58983], 00:16:32.991 | 99.00th=[78119], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:16:32.991 | 99.99th=[84411] 00:16:32.991 bw ( KiB/s): min=12784, max=16384, per=22.33%, avg=14584.00, stdev=2545.58, samples=2 00:16:32.991 iops : min= 3196, max= 4096, avg=3646.00, stdev=636.40, samples=2 00:16:32.991 lat (msec) : 4=1.21%, 10=20.81%, 20=53.78%, 50=20.90%, 100=3.30% 00:16:32.991 cpu : usr=2.66%, sys=4.24%, ctx=332, majf=0, minf=1 00:16:32.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.1% 00:16:32.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.991 issued rwts: total=3584,3774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.991 job3: (groupid=0, jobs=1): err= 0: pid=2533600: Fri Jul 12 14:21:24 2024 00:16:32.991 read: IOPS=3677, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1009msec) 00:16:32.991 slat (nsec): min=1032, max=17475k, avg=115313.12, stdev=978290.73 00:16:32.991 clat (usec): min=2829, max=47879, avg=15589.55, stdev=6873.04 00:16:32.991 lat (usec): min=2835, max=47903, avg=15704.86, stdev=6943.78 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 3589], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10290], 00:16:32.991 | 30.00th=[10814], 40.00th=[12125], 50.00th=[13173], 60.00th=[14877], 00:16:32.991 | 70.00th=[17957], 80.00th=[21103], 90.00th=[23987], 95.00th=[30278], 00:16:32.991 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:16:32.991 | 99.99th=[47973] 00:16:32.991 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:16:32.991 slat (nsec): min=1927, max=10472k, avg=113153.42, stdev=752063.07 00:16:32.991 clat (usec): min=839, max=80774, avg=17155.87, stdev=15353.51 00:16:32.991 lat (usec): min=846, max=80783, avg=17269.03, stdev=15450.04 00:16:32.991 clat percentiles (usec): 00:16:32.991 | 1.00th=[ 2573], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 9634], 00:16:32.991 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11600], 60.00th=[12911], 00:16:32.991 | 70.00th=[15926], 80.00th=[18220], 90.00th=[32375], 95.00th=[60031], 00:16:32.991 | 99.00th=[79168], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:16:32.991 | 99.99th=[81265] 00:16:32.991 bw ( KiB/s): min=16136, max=16624, per=25.08%, avg=16380.00, stdev=345.07, samples=2 00:16:32.991 iops : min= 4034, max= 4156, 
avg=4095.00, stdev=86.27, samples=2 00:16:32.991 lat (usec) : 1000=0.19% 00:16:32.991 lat (msec) : 2=0.29%, 4=1.11%, 10=17.50%, 20=61.70%, 50=15.83% 00:16:32.991 lat (msec) : 100=3.37% 00:16:32.991 cpu : usr=2.58%, sys=3.57%, ctx=324, majf=0, minf=1 00:16:32.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:32.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.991 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.991 00:16:32.991 Run status group 0 (all jobs): 00:16:32.991 READ: bw=58.5MiB/s (61.4MB/s), 13.8MiB/s-16.2MiB/s (14.5MB/s-17.0MB/s), io=59.4MiB (62.3MB), run=1003-1015msec 00:16:32.991 WRITE: bw=63.8MiB/s (66.9MB/s), 14.5MiB/s-17.8MiB/s (15.2MB/s-18.7MB/s), io=64.7MiB (67.9MB), run=1003-1015msec 00:16:32.991 00:16:32.991 Disk stats (read/write): 00:16:32.991 nvme0n1: ios=3621/3700, merge=0/0, ticks=50338/51686, in_queue=102024, util=97.80% 00:16:32.991 nvme0n2: ios=3121/3523, merge=0/0, ticks=40244/46351, in_queue=86595, util=91.07% 00:16:32.991 nvme0n3: ios=3072/3571, merge=0/0, ticks=48728/56122, in_queue=104850, util=88.98% 00:16:32.991 nvme0n4: ios=3125/3189, merge=0/0, ticks=45606/57680, in_queue=103286, util=90.88% 00:16:32.991 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:32.991 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2533704 00:16:32.991 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:32.991 14:21:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:32.991 [global] 00:16:32.991 thread=1 00:16:32.991 invalidate=1 00:16:32.991 rw=read 00:16:32.992 time_based=1 00:16:32.992 runtime=10 00:16:32.992 ioengine=libaio 
00:16:32.992 direct=1 00:16:32.992 bs=4096 00:16:32.992 iodepth=1 00:16:32.992 norandommap=1 00:16:32.992 numjobs=1 00:16:32.992 00:16:32.992 [job0] 00:16:32.992 filename=/dev/nvme0n1 00:16:32.992 [job1] 00:16:32.992 filename=/dev/nvme0n2 00:16:32.992 [job2] 00:16:32.992 filename=/dev/nvme0n3 00:16:32.992 [job3] 00:16:32.992 filename=/dev/nvme0n4 00:16:32.992 Could not set queue depth (nvme0n1) 00:16:32.992 Could not set queue depth (nvme0n2) 00:16:32.992 Could not set queue depth (nvme0n3) 00:16:32.992 Could not set queue depth (nvme0n4) 00:16:32.992 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.992 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.992 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.992 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.992 fio-3.35 00:16:32.992 Starting 4 threads 00:16:35.614 14:21:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:35.871 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=15802368, buflen=4096 00:16:35.871 fio: pid=2533987, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:35.871 14:21:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:36.129 14:21:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.129 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:36.129 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=299008, buflen=4096 
00:16:36.129 fio: pid=2533986, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:36.386 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=49299456, buflen=4096 00:16:36.386 fio: pid=2533983, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:36.386 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.386 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:36.645 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=40816640, buflen=4096 00:16:36.645 fio: pid=2533984, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:36.645 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.645 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:36.645 00:16:36.645 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2533983: Fri Jul 12 14:21:28 2024 00:16:36.645 read: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(47.0MiB/3097msec) 00:16:36.645 slat (usec): min=5, max=17565, avg= 8.94, stdev=174.51 00:16:36.645 clat (usec): min=181, max=530, avg=245.52, stdev=18.26 00:16:36.645 lat (usec): min=188, max=17937, avg=254.46, stdev=177.09 00:16:36.645 clat percentiles (usec): 00:16:36.645 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 221], 20.00th=[ 237], 00:16:36.645 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:16:36.645 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:16:36.645 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 371], 99.95th=[ 437], 00:16:36.645 | 99.99th=[ 494] 00:16:36.645 bw ( KiB/s): min=15496, max=15520, per=49.44%, 
avg=15507.20, stdev= 9.12, samples=5 00:16:36.645 iops : min= 3874, max= 3880, avg=3876.80, stdev= 2.28, samples=5 00:16:36.645 lat (usec) : 250=55.64%, 500=44.35%, 750=0.01% 00:16:36.645 cpu : usr=1.00%, sys=3.33%, ctx=12041, majf=0, minf=1 00:16:36.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 issued rwts: total=12037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:36.645 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2533984: Fri Jul 12 14:21:28 2024 00:16:36.645 read: IOPS=3013, BW=11.8MiB/s (12.3MB/s)(38.9MiB/3307msec) 00:16:36.645 slat (usec): min=6, max=11840, avg=12.59, stdev=220.48 00:16:36.645 clat (usec): min=188, max=43104, avg=315.34, stdev=1320.62 00:16:36.645 lat (usec): min=205, max=52983, avg=327.30, stdev=1405.20 00:16:36.645 clat percentiles (usec): 00:16:36.645 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:16:36.645 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:16:36.645 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 433], 00:16:36.645 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[40633], 99.95th=[41157], 00:16:36.645 | 99.99th=[43254] 00:16:36.645 bw ( KiB/s): min= 7357, max=15504, per=41.53%, avg=13026.17, stdev=3232.13, samples=6 00:16:36.645 iops : min= 1839, max= 3876, avg=3256.50, stdev=808.12, samples=6 00:16:36.645 lat (usec) : 250=41.23%, 500=58.53%, 750=0.06%, 1000=0.02% 00:16:36.645 lat (msec) : 2=0.04%, 50=0.11% 00:16:36.645 cpu : usr=1.72%, sys=4.78%, ctx=9972, majf=0, minf=1 00:16:36.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:36.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 issued rwts: total=9966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:36.645 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2533986: Fri Jul 12 14:21:28 2024 00:16:36.645 read: IOPS=25, BW=99.2KiB/s (102kB/s)(292KiB/2943msec) 00:16:36.645 slat (usec): min=12, max=3851, avg=73.74, stdev=445.21 00:16:36.645 clat (usec): min=529, max=41930, avg=39889.70, stdev=6594.42 00:16:36.645 lat (usec): min=560, max=44909, avg=39964.10, stdev=6618.37 00:16:36.645 clat percentiles (usec): 00:16:36.645 | 1.00th=[ 529], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:36.645 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:36.645 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:36.645 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:36.645 | 99.99th=[41681] 00:16:36.645 bw ( KiB/s): min= 96, max= 104, per=0.32%, avg=99.20, stdev= 4.38, samples=5 00:16:36.645 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:16:36.645 lat (usec) : 750=1.35% 00:16:36.645 lat (msec) : 2=1.35%, 50=95.95% 00:16:36.645 cpu : usr=0.14%, sys=0.00%, ctx=75, majf=0, minf=1 00:16:36.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:36.645 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2533987: Fri Jul 12 14:21:28 2024 00:16:36.645 read: IOPS=1420, BW=5680KiB/s (5816kB/s)(15.1MiB/2717msec) 00:16:36.645 
slat (nsec): min=6210, max=69931, avg=8305.59, stdev=2113.73 00:16:36.645 clat (usec): min=195, max=41976, avg=688.57, stdev=4220.36 00:16:36.645 lat (usec): min=203, max=41994, avg=696.87, stdev=4221.51 00:16:36.645 clat percentiles (usec): 00:16:36.645 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:16:36.645 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:16:36.645 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:16:36.645 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:36.645 | 99.99th=[42206] 00:16:36.645 bw ( KiB/s): min= 104, max=15696, per=19.65%, avg=6164.80, stdev=7577.98, samples=5 00:16:36.645 iops : min= 26, max= 3924, avg=1541.20, stdev=1894.50, samples=5 00:16:36.645 lat (usec) : 250=63.90%, 500=34.96% 00:16:36.645 lat (msec) : 2=0.03%, 50=1.09% 00:16:36.645 cpu : usr=0.70%, sys=2.03%, ctx=3862, majf=0, minf=2 00:16:36.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.645 issued rwts: total=3859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:36.645 00:16:36.645 Run status group 0 (all jobs): 00:16:36.645 READ: bw=30.6MiB/s (32.1MB/s), 99.2KiB/s-15.2MiB/s (102kB/s-15.9MB/s), io=101MiB (106MB), run=2717-3307msec 00:16:36.645 00:16:36.645 Disk stats (read/write): 00:16:36.645 nvme0n1: ios=11019/0, merge=0/0, ticks=2704/0, in_queue=2704, util=94.56% 00:16:36.645 nvme0n2: ios=9961/0, merge=0/0, ticks=2848/0, in_queue=2848, util=95.27% 00:16:36.645 nvme0n3: ios=71/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.42% 00:16:36.645 nvme0n4: ios=3905/0, merge=0/0, ticks=3716/0, in_queue=3716, util=99.52% 00:16:36.645 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.645 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:36.903 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.903 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:37.160 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.160 14:21:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:37.160 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.160 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:37.420 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:37.420 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2533704 00:16:37.420 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:37.420 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:37.680 nvmf hotplug test: fio failed as expected 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.680 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.680 rmmod nvme_tcp 00:16:37.940 rmmod nvme_fabrics 00:16:37.940 rmmod nvme_keyring 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2530918 ']' 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2530918 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2530918 ']' 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2530918 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2530918 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2530918' 00:16:37.940 killing process with pid 2530918 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2530918 00:16:37.940 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2530918 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.199 14:21:29 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.199 14:21:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.103 14:21:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.103 00:16:40.103 real 0m26.330s 00:16:40.103 user 1m45.935s 00:16:40.103 sys 0m8.061s 00:16:40.103 14:21:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.103 14:21:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 ************************************ 00:16:40.103 END TEST nvmf_fio_target 00:16:40.103 ************************************ 00:16:40.103 14:21:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.103 14:21:32 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:40.103 14:21:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.103 14:21:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.104 14:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.363 ************************************ 00:16:40.363 START TEST nvmf_bdevio 00:16:40.363 ************************************ 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:40.363 * Looking for test storage... 
00:16:40.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.363 14:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.632 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:45.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:45.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:45.633 Found net devices under 0000:86:00.0: cvl_0_0 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:45.633 Found net devices under 0000:86:00.1: cvl_0_1 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:45.633 00:16:45.633 --- 10.0.0.2 ping statistics --- 00:16:45.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.633 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:16:45.633 00:16:45.633 --- 10.0.0.1 ping statistics --- 00:16:45.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.633 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2538146 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2538146 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2538146 ']' 00:16:45.633 14:21:37 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.633 14:21:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:45.633 [2024-07-12 14:21:37.469030] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:16:45.633 [2024-07-12 14:21:37.469088] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.633 [2024-07-12 14:21:37.529358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.633 [2024-07-12 14:21:37.601475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.633 [2024-07-12 14:21:37.601519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.633 [2024-07-12 14:21:37.601526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.633 [2024-07-12 14:21:37.601531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.633 [2024-07-12 14:21:37.601536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.633 [2024-07-12 14:21:37.601664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.633 [2024-07-12 14:21:37.601752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.633 [2024-07-12 14:21:37.601837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.633 [2024-07-12 14:21:37.601838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 [2024-07-12 14:21:38.313215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 Malloc0 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 [2024-07-12 14:21:38.364460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:16:46.584 { 00:16:46.584 "params": { 00:16:46.584 "name": "Nvme$subsystem", 00:16:46.584 "trtype": "$TEST_TRANSPORT", 00:16:46.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.584 "adrfam": "ipv4", 00:16:46.584 "trsvcid": "$NVMF_PORT", 00:16:46.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.584 "hdgst": ${hdgst:-false}, 00:16:46.584 "ddgst": ${ddgst:-false} 00:16:46.584 }, 00:16:46.584 "method": "bdev_nvme_attach_controller" 00:16:46.584 } 00:16:46.584 EOF 00:16:46.584 )") 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:46.584 14:21:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:46.584 "params": { 00:16:46.584 "name": "Nvme1", 00:16:46.584 "trtype": "tcp", 00:16:46.584 "traddr": "10.0.0.2", 00:16:46.584 "adrfam": "ipv4", 00:16:46.584 "trsvcid": "4420", 00:16:46.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.584 "hdgst": false, 00:16:46.584 "ddgst": false 00:16:46.584 }, 00:16:46.584 "method": "bdev_nvme_attach_controller" 00:16:46.584 }' 00:16:46.584 [2024-07-12 14:21:38.414795] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:16:46.584 [2024-07-12 14:21:38.414838] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538255 ] 00:16:46.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.584 [2024-07-12 14:21:38.469216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.584 [2024-07-12 14:21:38.544744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.584 [2024-07-12 14:21:38.544839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.584 [2024-07-12 14:21:38.544841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.843 I/O targets: 00:16:46.843 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.843 00:16:46.843 00:16:46.843 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.843 http://cunit.sourceforge.net/ 00:16:46.843 00:16:46.843 00:16:46.843 Suite: bdevio tests on: Nvme1n1 00:16:47.101 Test: blockdev write read block ...passed 00:16:47.101 Test: blockdev write zeroes read block ...passed 00:16:47.101 Test: blockdev write zeroes read no split ...passed 00:16:47.101 Test: blockdev write zeroes read split ...passed 00:16:47.101 Test: blockdev write zeroes read split partial ...passed 00:16:47.101 Test: blockdev reset ...[2024-07-12 14:21:39.018089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:47.101 [2024-07-12 14:21:39.018154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6d0 (9): Bad file descriptor 00:16:47.101 [2024-07-12 14:21:39.032630] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:47.101 passed 00:16:47.101 Test: blockdev write read 8 blocks ...passed 00:16:47.101 Test: blockdev write read size > 128k ...passed 00:16:47.101 Test: blockdev write read invalid size ...passed 00:16:47.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.361 Test: blockdev write read max offset ...passed 00:16:47.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.361 Test: blockdev writev readv 8 blocks ...passed 00:16:47.361 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.361 Test: blockdev writev readv block ...passed 00:16:47.361 Test: blockdev writev readv size > 128k ...passed 00:16:47.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.361 Test: blockdev comparev and writev ...[2024-07-12 14:21:39.246099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.246949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.361 [2024-07-12 14:21:39.246956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.361 passed 00:16:47.361 Test: blockdev nvme passthru rw ...passed 00:16:47.361 Test: blockdev nvme passthru vendor specific ...[2024-07-12 14:21:39.330744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.361 [2024-07-12 14:21:39.330758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.330872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.361 [2024-07-12 14:21:39.330881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.330990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.361 [2024-07-12 14:21:39.330999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.361 [2024-07-12 14:21:39.331112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.361 [2024-07-12 14:21:39.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.361 passed 00:16:47.361 Test: blockdev nvme admin passthru ...passed 00:16:47.619 Test: blockdev copy ...passed 00:16:47.619 00:16:47.619 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.619 suites 1 1 n/a 0 0 00:16:47.619 tests 23 23 23 0 0 00:16:47.619 asserts 152 152 152 0 n/a 00:16:47.619 00:16:47.619 Elapsed time = 1.138 seconds 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.619 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.619 rmmod nvme_tcp 00:16:47.619 rmmod nvme_fabrics 00:16:47.619 rmmod nvme_keyring 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2538146 ']' 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2538146 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2538146 ']' 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2538146 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2538146 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2538146' 00:16:47.878 killing process with pid 2538146 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
2538146 00:16:47.878 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2538146 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.137 14:21:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.044 14:21:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.044 00:16:50.044 real 0m9.838s 00:16:50.044 user 0m12.974s 00:16:50.044 sys 0m4.362s 00:16:50.044 14:21:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.044 14:21:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:50.044 ************************************ 00:16:50.044 END TEST nvmf_bdevio 00:16:50.044 ************************************ 00:16:50.044 14:21:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.044 14:21:41 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:50.044 14:21:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.044 14:21:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.044 14:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.044 ************************************ 00:16:50.044 START TEST nvmf_auth_target 00:16:50.044 
************************************ 00:16:50.044 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:50.303 * Looking for test storage... 00:16:50.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.303 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.304 14:21:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.304 14:21:42 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.304 14:21:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.304 14:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.578 14:21:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:55.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.578 14:21:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:55.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:55.578 Found net devices under 0000:86:00.0: cvl_0_0 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:55.578 Found net devices under 0000:86:00.1: cvl_0_1 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.578 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:16:55.579 00:16:55.579 --- 10.0.0.2 ping statistics --- 00:16:55.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.579 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:55.579 00:16:55.579 --- 10.0.0.1 ping statistics --- 00:16:55.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.579 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2541996 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2541996 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2541996 ']' 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.579 14:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2542030 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=f54610df4b42e96eb4ccc0eb8d609e37e63c415297b87929 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fM8 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f54610df4b42e96eb4ccc0eb8d609e37e63c415297b87929 0 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f54610df4b42e96eb4ccc0eb8d609e37e63c415297b87929 0 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f54610df4b42e96eb4ccc0eb8d609e37e63c415297b87929 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fM8 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fM8 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.fM8 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:56.515 14:21:48 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7485a4a0da73931419cb45854473446b881edee548cc6ffe41dd292fbfe21eb1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xpf 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7485a4a0da73931419cb45854473446b881edee548cc6ffe41dd292fbfe21eb1 3 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7485a4a0da73931419cb45854473446b881edee548cc6ffe41dd292fbfe21eb1 3 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7485a4a0da73931419cb45854473446b881edee548cc6ffe41dd292fbfe21eb1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xpf 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xpf 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xpf 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3fc2f07b4022f5be297a43a65ce0cb1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BqE 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3fc2f07b4022f5be297a43a65ce0cb1 1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3fc2f07b4022f5be297a43a65ce0cb1 1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d3fc2f07b4022f5be297a43a65ce0cb1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:56.515 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BqE 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BqE 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.BqE 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4143e69cc52c3303508b3058fc481abf53eb73dafeaf72a 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.u7L 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4143e69cc52c3303508b3058fc481abf53eb73dafeaf72a 2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4143e69cc52c3303508b3058fc481abf53eb73dafeaf72a 2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4143e69cc52c3303508b3058fc481abf53eb73dafeaf72a 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.u7L 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.u7L 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.u7L 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aec283b9292a84c60235bcc301db80da80ca4bdee674f888 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yIR 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aec283b9292a84c60235bcc301db80da80ca4bdee674f888 2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aec283b9292a84c60235bcc301db80da80ca4bdee674f888 2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aec283b9292a84c60235bcc301db80da80ca4bdee674f888 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yIR 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yIR 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.yIR 00:16:56.774 14:21:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2499af6738f698c24e3cb320e113ccf3 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BfD 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2499af6738f698c24e3cb320e113ccf3 1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2499af6738f698c24e3cb320e113ccf3 1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2499af6738f698c24e3cb320e113ccf3 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BfD 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BfD 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BfD 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c1727449b2292be0f9bd9497314fef41a4c35ec5a3e3cd33810a91bba4f6aa4f 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bPc 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c1727449b2292be0f9bd9497314fef41a4c35ec5a3e3cd33810a91bba4f6aa4f 3 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c1727449b2292be0f9bd9497314fef41a4c35ec5a3e3cd33810a91bba4f6aa4f 3 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c1727449b2292be0f9bd9497314fef41a4c35ec5a3e3cd33810a91bba4f6aa4f 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:56.774 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:57.033 14:21:48 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bPc 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bPc 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.bPc 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2541996 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2541996 ']' 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
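The `gen_dhchap_key` traces above draw N random bytes with `xxd -p -c0 -l N /dev/urandom`, then pass the hex string through `format_key DHHC-1 <key> <digest>` (the embedded `python -` step). The secret appears to be `DHHC-1:0<digest>:` followed by the base64 of the ASCII hex string plus a CRC-32 tail, per the NVMe DH-HMAC-CHAP secret representation. A minimal sketch of the base64 portion, using the first key generated above (the CRC-32 suffix handling is an assumption, so it is omitted here):

```shell
#!/bin/sh
# Hex key captured from the "xxd -p -c0 -l 24 /dev/urandom" step above
# (48 hex characters = 24 random bytes).
key=f54610df4b42e96eb4ccc0eb8d609e37e63c415297b87929

# The secret encodes the ASCII hex string itself, not the decoded bytes.
# 48 ASCII bytes are a multiple of 3, so they base64-encode to exactly
# 64 characters with no padding.
b64=$(printf '%s' "$key" | base64 | tr -d '\n')
echo "DHHC-1:00:${b64}..."   # CRC-32 tail omitted in this sketch
# -> DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5...
```

The `nvme connect` line later in this log carries the full secret `DHHC-1:00:ZjU0...IPkHsQ==:`; its first 64 base64 characters match this output, with the remaining `IPkHsQ==` being the encoded CRC-32 bytes.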
00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2542030 /var/tmp/host.sock 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2542030 ']' 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:57.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.034 14:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fM8 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fM8 00:16:57.293 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fM8 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xpf ]] 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xpf 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.552 14:21:49 
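Every `hostrpc` trace above (`target/auth.sh@31`) expands to the same `rpc.py` invocation aimed at the host application's socket (the `spdk_tgt` started with `-r /var/tmp/host.sock`). The helper is presumably a thin wrapper along these lines — a hypothetical reconstruction, with the workspace path taken from this log:

```shell
#!/bin/sh
# Hypothetical reconstruction of the hostrpc helper traced at auth.sh@31:
# forward all arguments to rpc.py, pointed at the host-side RPC socket.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
```

This mirrors how the target-side `rpc_cmd` calls default to `/var/tmp/spdk.sock` while the host keyring and `bdev_nvme_*` configuration go through `/var/tmp/host.sock`, keeping the two SPDK processes' RPC traffic separate.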
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xpf 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xpf 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BqE 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BqE 00:16:57.552 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BqE 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.u7L ]] 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u7L 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u7L 00:16:57.811 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u7L 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yIR 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yIR 00:16:58.070 14:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yIR 00:16:58.328 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.BfD ]] 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BfD 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BfD 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.BfD 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bPc 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bPc 00:16:58.329 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bPc 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:58.589 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.848 14:21:50 
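The traces at `target/auth.sh@91-94` reveal the test matrix: nested loops over digests, DH groups, and key indices, with `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...` reconfiguring the host before each connect. A sketch of that loop structure — the full digest and dhgroup lists are assumptions, since only the `sha256`/`null` combination appears in this portion of the log:

```shell
#!/bin/sh
# Hypothetical reconstruction of the auth.sh iteration structure; only
# the sha256/null combination with keys 0-3 is visible in this log chunk.
digests="sha256"    # assumed to continue with sha384 sha512
dhgroups="null"     # assumed to continue with ffdhe2048 ... ffdhe8192
for digest in $digests; do
  for dhgroup in $dhgroups; do
    for keyid in 0 1 2 3; do
      # in the real script this is a hostrpc call followed by
      # connect_authenticate "$digest" "$dhgroup" "$keyid"
      echo "bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup (key$keyid)"
    done
  done
done
```

Re-issuing `bdev_nvme_set_options` inside the loop matters: the host's allowed digest/dhgroup set must be narrowed to exactly the combination under test so a successful connect proves that specific negotiation, not a fallback.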
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.848 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.107 00:16:59.107 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.107 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.107 14:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.107 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.107 
14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.107 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.107 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.107 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.403 { 00:16:59.403 "cntlid": 1, 00:16:59.403 "qid": 0, 00:16:59.403 "state": "enabled", 00:16:59.403 "thread": "nvmf_tgt_poll_group_000", 00:16:59.403 "listen_address": { 00:16:59.403 "trtype": "TCP", 00:16:59.403 "adrfam": "IPv4", 00:16:59.403 "traddr": "10.0.0.2", 00:16:59.403 "trsvcid": "4420" 00:16:59.403 }, 00:16:59.403 "peer_address": { 00:16:59.403 "trtype": "TCP", 00:16:59.403 "adrfam": "IPv4", 00:16:59.403 "traddr": "10.0.0.1", 00:16:59.403 "trsvcid": "36446" 00:16:59.403 }, 00:16:59.403 "auth": { 00:16:59.403 "state": "completed", 00:16:59.403 "digest": "sha256", 00:16:59.403 "dhgroup": "null" 00:16:59.403 } 00:16:59.403 } 00:16:59.403 ]' 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.403 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
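The checks at `auth.sh@46-48` pull individual fields out of the `nvmf_subsystem_get_qpairs` JSON with `jq -r` and compare them against the digest and dhgroup under test. The same checks can be reproduced offline against the qpair record captured above (trimmed to the inspected fields; assumes `jq` is installed):

```shell
#!/bin/sh
# Qpair record as reported by nvmf_subsystem_get_qpairs in the log above,
# trimmed to the fields the auth.sh checks actually read.
qpairs='[ { "cntlid": 1, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha256", "dhgroup": "null" } } ]'

# Mirror the [[ x == y ]] comparisons from auth.sh@46-48.
[ "$(echo "$qpairs" | jq -r '.[0].auth.digest')"  = sha256 ]
[ "$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')" = null ]
[ "$(echo "$qpairs" | jq -r '.[0].auth.state')"   = completed ]
echo "auth negotiation verified"
```

An `auth.state` of `completed` is what distinguishes a genuinely authenticated queue pair from one that merely connected; the digest and dhgroup fields confirm the negotiated parameters are the ones the loop configured.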
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.661 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:00.226 14:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
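The host side of each iteration (`auth.sh@52-56`) drives the kernel initiator directly: `nvme connect` with `--dhchap-secret` and `--dhchap-ctrl-secret` for bidirectional authentication, then `nvme disconnect` and removal of the host entry from the subsystem. A skeleton of that sequence, with the NQN, host UUID, and secrets taken from this log; it requires nvme-cli and a live target, so it is a non-runnable fragment rather than a standalone script:

```shell
#!/bin/sh
# Skeleton of the host-side connect/disconnect cycle from auth.sh@52-56.
# hostid, NQN, and secrets are the values observed in this log.
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0
secret='DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==:'
ctrl_secret='DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=:'

# --dhchap-secret authenticates the host; --dhchap-ctrl-secret additionally
# authenticates the controller back to the host (bidirectional DH-HMAC-CHAP).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

nvme disconnect -n "$subnqn"
```

The `-i 1` limits the connection to a single I/O queue, which keeps the subsequent `nvmf_subsystem_get_qpairs` output down to the admin queue pair the test inspects.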
00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.226 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.227 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.484 00:17:00.484 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.484 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.484 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:00.743 {
00:17:00.743 "cntlid": 3,
00:17:00.743 "qid": 0,
00:17:00.743 "state": "enabled",
00:17:00.743 "thread": "nvmf_tgt_poll_group_000",
00:17:00.743 "listen_address": {
00:17:00.743 "trtype": "TCP",
00:17:00.743 "adrfam": "IPv4",
00:17:00.743 "traddr": "10.0.0.2",
00:17:00.743 "trsvcid": "4420"
00:17:00.743 },
00:17:00.743 "peer_address": {
00:17:00.743 "trtype": "TCP",
00:17:00.743 "adrfam": "IPv4",
00:17:00.743 "traddr": "10.0.0.1",
00:17:00.743 "trsvcid": "36476"
00:17:00.743 },
00:17:00.743 "auth": {
00:17:00.743 "state": "completed",
00:17:00.743 "digest": "sha256",
00:17:00.743 "dhgroup": "null"
00:17:00.743 }
00:17:00.743 }
00:17:00.743 ]'
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:00.743 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.002 14:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==:
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:01.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:01.567 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:01.826 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:02.085
00:17:02.085 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:02.085 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:02.085 14:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:02.345 {
00:17:02.345 "cntlid": 5,
00:17:02.345 "qid": 0,
00:17:02.345 "state": "enabled",
00:17:02.345 "thread": "nvmf_tgt_poll_group_000",
00:17:02.345 "listen_address": {
00:17:02.345 "trtype": "TCP",
00:17:02.345 "adrfam": "IPv4",
00:17:02.345 "traddr": "10.0.0.2",
00:17:02.345 "trsvcid": "4420"
00:17:02.345 },
00:17:02.345 "peer_address": {
00:17:02.345 "trtype": "TCP",
00:17:02.345 "adrfam": "IPv4",
00:17:02.345 "traddr": "10.0.0.1",
00:17:02.345 "trsvcid": "36490"
00:17:02.345 },
00:17:02.345 "auth": {
00:17:02.345 "state": "completed",
00:17:02.345 "digest": "sha256",
00:17:02.345 "dhgroup": "null"
00:17:02.345 }
00:17:02.345 }
00:17:02.345 ]'
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:02.345 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:02.605 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR:
00:17:03.172 14:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:03.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:03.172 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:03.431 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:03.431
00:17:03.688 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:03.688 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:03.689 {
00:17:03.689 "cntlid": 7,
00:17:03.689 "qid": 0,
00:17:03.689 "state": "enabled",
00:17:03.689 "thread": "nvmf_tgt_poll_group_000",
00:17:03.689 "listen_address": {
00:17:03.689 "trtype": "TCP",
00:17:03.689 "adrfam": "IPv4",
00:17:03.689 "traddr": "10.0.0.2",
00:17:03.689 "trsvcid": "4420"
00:17:03.689 },
00:17:03.689 "peer_address": {
00:17:03.689 "trtype": "TCP",
00:17:03.689 "adrfam": "IPv4",
00:17:03.689 "traddr": "10.0.0.1",
00:17:03.689 "trsvcid": "36512"
00:17:03.689 },
00:17:03.689 "auth": {
00:17:03.689 "state": "completed",
00:17:03.689 "digest": "sha256",
00:17:03.689 "dhgroup": "null"
00:17:03.689 }
00:17:03.689 }
00:17:03.689 ]'
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:03.689 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.948 14:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=:
00:17:04.517 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.776 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:05.035
00:17:05.035 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:05.035 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.035 14:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:05.293 {
00:17:05.293 "cntlid": 9,
00:17:05.293 "qid": 0,
00:17:05.293 "state": "enabled",
00:17:05.293 "thread": "nvmf_tgt_poll_group_000",
00:17:05.293 "listen_address": {
00:17:05.293 "trtype": "TCP",
00:17:05.293 "adrfam": "IPv4",
00:17:05.293 "traddr": "10.0.0.2",
00:17:05.293 "trsvcid": "4420"
00:17:05.293 },
00:17:05.293 "peer_address": {
00:17:05.293 "trtype": "TCP",
00:17:05.293 "adrfam": "IPv4",
00:17:05.293 "traddr": "10.0.0.1",
00:17:05.293 "trsvcid": "60182"
00:17:05.293 },
00:17:05.293 "auth": {
00:17:05.293 "state": "completed",
00:17:05.293 "digest": "sha256",
00:17:05.293 "dhgroup": "ffdhe2048"
00:17:05.293 }
00:17:05.293 }
00:17:05.293 ]'
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:05.293 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.551 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=:
00:17:06.118 14:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:06.118 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.377 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:06.636
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.636 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.637 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:06.896 {
00:17:06.896 "cntlid": 11,
00:17:06.896 "qid": 0,
00:17:06.896 "state": "enabled",
00:17:06.896 "thread": "nvmf_tgt_poll_group_000",
00:17:06.896 "listen_address": {
00:17:06.896 "trtype": "TCP",
00:17:06.896 "adrfam": "IPv4",
00:17:06.896 "traddr": "10.0.0.2",
00:17:06.896 "trsvcid": "4420"
00:17:06.896 },
00:17:06.896 "peer_address": {
00:17:06.896 "trtype": "TCP",
00:17:06.896 "adrfam": "IPv4",
00:17:06.896 "traddr": "10.0.0.1",
00:17:06.896 "trsvcid": "60204"
00:17:06.896 },
00:17:06.896 "auth": {
00:17:06.896 "state": "completed",
00:17:06.896 "digest": "sha256",
00:17:06.896 "dhgroup": "ffdhe2048"
00:17:06.896 }
00:17:06.896 }
00:17:06.896 ]'
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.896 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.156 14:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==:
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.726 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.985
00:17:07.985 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:07.985 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:07.985 14:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:08.244 {
00:17:08.244 "cntlid": 13,
00:17:08.244 "qid": 0,
00:17:08.244 "state": "enabled",
00:17:08.244 "thread": "nvmf_tgt_poll_group_000",
00:17:08.244 "listen_address": {
00:17:08.244 "trtype": "TCP",
00:17:08.244 "adrfam": "IPv4",
00:17:08.244 "traddr": "10.0.0.2",
00:17:08.244 "trsvcid": "4420"
00:17:08.244 },
00:17:08.244 "peer_address": {
00:17:08.244 "trtype": "TCP",
00:17:08.244 "adrfam": "IPv4",
00:17:08.244 "traddr": "10.0.0.1",
00:17:08.244 "trsvcid": "60214"
00:17:08.244 },
00:17:08.244 "auth": {
00:17:08.244 "state": "completed",
00:17:08.244 "digest": "sha256",
00:17:08.244 "dhgroup": "ffdhe2048"
00:17:08.244 }
00:17:08.244 }
00:17:08.244 ]'
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:08.244 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:08.503 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.503 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.503 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.503 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR:
00:17:09.072 14:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:09.072 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.331 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.589 
00:17:09.589 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.589 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.589 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.847 { 00:17:09.847 "cntlid": 15, 00:17:09.847 "qid": 0, 00:17:09.847 "state": "enabled", 00:17:09.847 "thread": "nvmf_tgt_poll_group_000", 00:17:09.847 "listen_address": { 00:17:09.847 "trtype": "TCP", 00:17:09.847 "adrfam": "IPv4", 00:17:09.847 "traddr": "10.0.0.2", 00:17:09.847 "trsvcid": "4420" 00:17:09.847 }, 00:17:09.847 "peer_address": { 00:17:09.847 "trtype": "TCP", 00:17:09.847 "adrfam": "IPv4", 00:17:09.847 "traddr": "10.0.0.1", 00:17:09.847 "trsvcid": "60244" 00:17:09.847 }, 00:17:09.847 "auth": { 00:17:09.847 "state": "completed", 00:17:09.847 "digest": "sha256", 00:17:09.847 "dhgroup": "ffdhe2048" 00:17:09.847 } 00:17:09.847 } 00:17:09.847 ]' 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.847 14:22:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.847 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.106 14:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.672 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.931 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.931 00:17:11.188 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.188 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.188 14:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.188 { 00:17:11.188 "cntlid": 17, 00:17:11.188 "qid": 0, 00:17:11.188 "state": "enabled", 00:17:11.188 "thread": "nvmf_tgt_poll_group_000", 00:17:11.188 "listen_address": { 00:17:11.188 "trtype": "TCP", 00:17:11.188 "adrfam": "IPv4", 00:17:11.188 "traddr": "10.0.0.2", 00:17:11.188 "trsvcid": "4420" 00:17:11.188 }, 00:17:11.188 "peer_address": { 00:17:11.188 "trtype": "TCP", 00:17:11.188 "adrfam": "IPv4", 00:17:11.188 "traddr": "10.0.0.1", 00:17:11.188 "trsvcid": "60268" 00:17:11.188 }, 00:17:11.188 "auth": { 00:17:11.188 "state": "completed", 00:17:11.188 "digest": "sha256", 00:17:11.188 "dhgroup": "ffdhe3072" 00:17:11.188 } 00:17:11.188 } 00:17:11.188 ]' 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:17:11.188 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.446 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:12.013 14:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.013 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.272 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.530 00:17:12.530 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.530 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.530 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.788 { 00:17:12.788 "cntlid": 19, 00:17:12.788 "qid": 0, 00:17:12.788 "state": "enabled", 00:17:12.788 "thread": "nvmf_tgt_poll_group_000", 00:17:12.788 "listen_address": { 00:17:12.788 "trtype": "TCP", 00:17:12.788 "adrfam": "IPv4", 00:17:12.788 "traddr": "10.0.0.2", 00:17:12.788 "trsvcid": "4420" 00:17:12.788 }, 00:17:12.788 "peer_address": { 00:17:12.788 "trtype": "TCP", 00:17:12.788 "adrfam": "IPv4", 00:17:12.788 "traddr": "10.0.0.1", 00:17:12.788 "trsvcid": "60302" 00:17:12.788 }, 00:17:12.788 "auth": { 00:17:12.788 "state": "completed", 00:17:12.788 "digest": "sha256", 00:17:12.788 "dhgroup": "ffdhe3072" 00:17:12.788 } 00:17:12.788 } 00:17:12.788 ]' 00:17:12.788 
14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.788 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.047 14:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.612 14:22:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.612 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:13.871 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.130 00:17:14.130 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.130 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.130 14:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.130 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.130 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.130 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.130 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.388 14:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.388 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.388 { 00:17:14.388 "cntlid": 21, 00:17:14.388 "qid": 0, 00:17:14.388 "state": "enabled", 00:17:14.388 "thread": "nvmf_tgt_poll_group_000", 00:17:14.388 "listen_address": { 00:17:14.388 "trtype": "TCP", 00:17:14.388 "adrfam": "IPv4", 00:17:14.388 "traddr": "10.0.0.2", 00:17:14.388 "trsvcid": "4420" 00:17:14.388 }, 00:17:14.388 "peer_address": { 00:17:14.388 "trtype": "TCP", 00:17:14.388 "adrfam": "IPv4", 00:17:14.388 "traddr": "10.0.0.1", 00:17:14.388 "trsvcid": "60332" 00:17:14.388 }, 00:17:14.388 "auth": { 00:17:14.388 "state": "completed", 00:17:14.388 "digest": 
"sha256", 00:17:14.388 "dhgroup": "ffdhe3072" 00:17:14.388 } 00:17:14.388 } 00:17:14.388 ]' 00:17:14.388 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.388 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.389 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.647 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:15.214 14:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.214 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.214 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.214 14:22:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.215 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.472 00:17:15.472 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.472 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.472 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.730 { 00:17:15.730 "cntlid": 23, 00:17:15.730 "qid": 0, 00:17:15.730 "state": "enabled", 00:17:15.730 "thread": "nvmf_tgt_poll_group_000", 00:17:15.730 "listen_address": { 00:17:15.730 "trtype": "TCP", 00:17:15.730 "adrfam": "IPv4", 00:17:15.730 "traddr": "10.0.0.2", 00:17:15.730 "trsvcid": "4420" 00:17:15.730 }, 00:17:15.730 "peer_address": { 00:17:15.730 "trtype": "TCP", 00:17:15.730 "adrfam": "IPv4", 00:17:15.730 "traddr": "10.0.0.1", 00:17:15.730 "trsvcid": "48550" 00:17:15.730 }, 00:17:15.730 "auth": 
{ 00:17:15.730 "state": "completed", 00:17:15.730 "digest": "sha256", 00:17:15.730 "dhgroup": "ffdhe3072" 00:17:15.730 } 00:17:15.730 } 00:17:15.730 ]' 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.730 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.988 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.988 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.988 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.988 14:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.554 14:22:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.554 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.812 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.070 00:17:17.070 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.070 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.070 14:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.329 { 00:17:17.329 "cntlid": 25, 00:17:17.329 "qid": 0, 00:17:17.329 "state": "enabled", 00:17:17.329 "thread": "nvmf_tgt_poll_group_000", 00:17:17.329 "listen_address": { 00:17:17.329 "trtype": "TCP", 00:17:17.329 "adrfam": "IPv4", 00:17:17.329 "traddr": "10.0.0.2", 00:17:17.329 "trsvcid": "4420" 00:17:17.329 }, 00:17:17.329 "peer_address": { 00:17:17.329 "trtype": "TCP", 
00:17:17.329 "adrfam": "IPv4",
00:17:17.329 "traddr": "10.0.0.1",
00:17:17.329 "trsvcid": "48570"
00:17:17.329 },
00:17:17.329 "auth": {
00:17:17.329 "state": "completed",
00:17:17.329 "digest": "sha256",
00:17:17.329 "dhgroup": "ffdhe4096"
00:17:17.329 }
00:17:17.329 }
00:17:17.329 ]'
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.329 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.588 14:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=:
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:18.157 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:18.428 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:18.774
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:18.774 {
00:17:18.774 "cntlid": 27,
00:17:18.774 "qid": 0,
00:17:18.774 "state": "enabled",
00:17:18.774 "thread": "nvmf_tgt_poll_group_000",
00:17:18.774 "listen_address": {
00:17:18.774 "trtype": "TCP",
00:17:18.774 "adrfam": "IPv4",
00:17:18.774 "traddr": "10.0.0.2",
00:17:18.774 "trsvcid": "4420"
00:17:18.774 },
00:17:18.774 "peer_address": {
00:17:18.774 "trtype": "TCP",
00:17:18.774 "adrfam": "IPv4",
00:17:18.774 "traddr": "10.0.0.1",
00:17:18.774 "trsvcid": "48612"
00:17:18.774 },
00:17:18.774 "auth": {
00:17:18.774 "state": "completed",
00:17:18.774 "digest": "sha256",
00:17:18.774 "dhgroup": "ffdhe4096"
00:17:18.774 }
00:17:18.774 }
00:17:18.774 ]'
00:17:18.774 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.033 14:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.033 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==:
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:19.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:19.599 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:19.858 14:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:20.117
00:17:20.117 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:20.117 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:20.117 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:20.375 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:20.375 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:20.375 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:20.375 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:20.376 {
00:17:20.376 "cntlid": 29,
00:17:20.376 "qid": 0,
00:17:20.376 "state": "enabled",
00:17:20.376 "thread": "nvmf_tgt_poll_group_000",
00:17:20.376 "listen_address": {
00:17:20.376 "trtype": "TCP",
00:17:20.376 "adrfam": "IPv4",
00:17:20.376 "traddr": "10.0.0.2",
00:17:20.376 "trsvcid": "4420"
00:17:20.376 },
00:17:20.376 "peer_address": {
00:17:20.376 "trtype": "TCP",
00:17:20.376 "adrfam": "IPv4",
00:17:20.376 "traddr": "10.0.0.1",
00:17:20.376 "trsvcid": "48632"
00:17:20.376 },
00:17:20.376 "auth": {
00:17:20.376 "state": "completed",
00:17:20.376 "digest": "sha256",
00:17:20.376 "dhgroup": "ffdhe4096"
00:17:20.376 }
00:17:20.376 }
00:17:20.376 ]'
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:20.376 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:20.635 14:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR:
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:21.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:21.202 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:21.461 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:21.719
00:17:21.719 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:21.719 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:21.719 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.978 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:21.978 {
00:17:21.978 "cntlid": 31,
00:17:21.978 "qid": 0,
00:17:21.978 "state": "enabled",
00:17:21.978 "thread": "nvmf_tgt_poll_group_000",
00:17:21.978 "listen_address": {
00:17:21.978 "trtype": "TCP",
00:17:21.978 "adrfam": "IPv4",
00:17:21.978 "traddr": "10.0.0.2",
00:17:21.978 "trsvcid": "4420"
00:17:21.978 },
00:17:21.978 "peer_address": {
00:17:21.978 "trtype": "TCP",
00:17:21.978 "adrfam": "IPv4",
00:17:21.978 "traddr": "10.0.0.1",
00:17:21.979 "trsvcid": "48652"
00:17:21.979 },
00:17:21.979 "auth": {
00:17:21.979 "state": "completed",
00:17:21.979 "digest": "sha256",
00:17:21.979 "dhgroup": "ffdhe4096"
00:17:21.979 }
00:17:21.979 }
00:17:21.979 ]'
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:21.979 14:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:22.238 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=:
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:22.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:22.807 14:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:23.375
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:23.375 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:23.375 {
00:17:23.375 "cntlid": 33,
00:17:23.375 "qid": 0,
00:17:23.375 "state": "enabled",
00:17:23.375 "thread": "nvmf_tgt_poll_group_000",
00:17:23.375 "listen_address": {
00:17:23.375 "trtype": "TCP",
00:17:23.375 "adrfam": "IPv4",
00:17:23.375 "traddr": "10.0.0.2",
00:17:23.375 "trsvcid": "4420"
00:17:23.375 },
00:17:23.375 "peer_address": {
00:17:23.375 "trtype": "TCP",
00:17:23.375 "adrfam": "IPv4",
00:17:23.375 "traddr": "10.0.0.1",
00:17:23.375 "trsvcid": "48678"
00:17:23.375 },
00:17:23.375 "auth": {
00:17:23.375 "state": "completed",
00:17:23.375 "digest": "sha256",
00:17:23.375 "dhgroup": "ffdhe6144"
00:17:23.376 }
00:17:23.376 }
00:17:23.376 ]'
00:17:23.376 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:23.376 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:23.376 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:23.635 14:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=:
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:24.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:24.203 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:24.462 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:17:24.462 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:24.462 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:24.462 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.463 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:24.721
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:24.980 {
00:17:24.980 "cntlid": 35,
00:17:24.980 "qid": 0,
00:17:24.980 "state": "enabled",
00:17:24.980 "thread": "nvmf_tgt_poll_group_000",
00:17:24.980 "listen_address": {
00:17:24.980 "trtype": "TCP",
00:17:24.980 "adrfam": "IPv4",
00:17:24.980 "traddr": "10.0.0.2",
00:17:24.980 "trsvcid": "4420"
00:17:24.980 },
00:17:24.980 "peer_address": {
00:17:24.980 "trtype": "TCP",
00:17:24.980 "adrfam": "IPv4",
00:17:24.980 "traddr": "10.0.0.1",
00:17:24.980 "trsvcid": "56014"
00:17:24.980 },
00:17:24.980 "auth": {
00:17:24.980 "state": "completed",
00:17:24.980 "digest": "sha256",
00:17:24.980 "dhgroup": "ffdhe6144"
00:17:24.980 }
00:17:24.980 }
00:17:24.980 ]'
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:24.980 14:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.239 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==:
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:25.806 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.065 14:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.324
00:17:26.324 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:26.324 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:26.324 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:26.583 {
00:17:26.583 "cntlid": 37,
00:17:26.583 "qid": 0,
00:17:26.583 "state": "enabled",
00:17:26.583 "thread": "nvmf_tgt_poll_group_000",
00:17:26.583 "listen_address": {
00:17:26.583 "trtype": "TCP",
00:17:26.583 "adrfam": "IPv4",
00:17:26.583 "traddr": "10.0.0.2",
00:17:26.583 "trsvcid": "4420"
00:17:26.583 },
00:17:26.583 "peer_address": {
00:17:26.583 "trtype": "TCP",
00:17:26.583 "adrfam": "IPv4",
00:17:26.583 "traddr": "10.0.0.1",
00:17:26.583 "trsvcid": "56050"
00:17:26.583 },
00:17:26.583 "auth": {
00:17:26.583 "state": "completed",
00:17:26.583 "digest": "sha256",
00:17:26.583 "dhgroup": "ffdhe6144"
00:17:26.583 }
00:17:26.583 }
00:17:26.583 ]'
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:26.583 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:26.841 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.841 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.841 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.841 14:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:27.408 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:27.667 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:27.667 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.667 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.667 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.667 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.667 14:22:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.668 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.925 00:17:27.925 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.925 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.925 14:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.184 14:22:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.184 { 00:17:28.184 "cntlid": 39, 00:17:28.184 "qid": 0, 00:17:28.184 "state": "enabled", 00:17:28.184 "thread": "nvmf_tgt_poll_group_000", 00:17:28.184 "listen_address": { 00:17:28.184 "trtype": "TCP", 00:17:28.184 "adrfam": "IPv4", 00:17:28.184 "traddr": "10.0.0.2", 00:17:28.184 "trsvcid": "4420" 00:17:28.184 }, 00:17:28.184 "peer_address": { 00:17:28.184 "trtype": "TCP", 00:17:28.184 "adrfam": "IPv4", 00:17:28.184 "traddr": "10.0.0.1", 00:17:28.184 "trsvcid": "56070" 00:17:28.184 }, 00:17:28.184 "auth": { 00:17:28.184 "state": "completed", 00:17:28.184 "digest": "sha256", 00:17:28.184 "dhgroup": "ffdhe6144" 00:17:28.184 } 00:17:28.184 } 00:17:28.184 ]' 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.184 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.443 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.443 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:29.010 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.011 14:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.270 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.839 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.839 { 00:17:29.839 "cntlid": 41, 00:17:29.839 "qid": 0, 00:17:29.839 "state": "enabled", 00:17:29.839 "thread": "nvmf_tgt_poll_group_000", 00:17:29.839 "listen_address": { 00:17:29.839 "trtype": "TCP", 00:17:29.839 "adrfam": "IPv4", 00:17:29.839 "traddr": "10.0.0.2", 00:17:29.839 "trsvcid": "4420" 00:17:29.839 }, 00:17:29.839 "peer_address": { 00:17:29.839 "trtype": "TCP", 00:17:29.839 "adrfam": "IPv4", 00:17:29.839 "traddr": "10.0.0.1", 00:17:29.839 "trsvcid": "56106" 00:17:29.839 }, 00:17:29.839 "auth": { 00:17:29.839 "state": "completed", 00:17:29.839 "digest": "sha256", 00:17:29.839 "dhgroup": "ffdhe8192" 00:17:29.839 } 00:17:29.839 } 00:17:29.839 ]' 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.839 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.098 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.098 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.098 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.098 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.098 14:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:30.098 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:30.667 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.926 14:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.493 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.493 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.750 { 00:17:31.750 "cntlid": 43, 00:17:31.750 "qid": 0, 00:17:31.750 "state": "enabled", 00:17:31.750 "thread": "nvmf_tgt_poll_group_000", 00:17:31.750 "listen_address": { 00:17:31.750 "trtype": "TCP", 00:17:31.750 "adrfam": "IPv4", 00:17:31.750 "traddr": "10.0.0.2", 00:17:31.750 "trsvcid": "4420" 00:17:31.750 }, 00:17:31.750 "peer_address": { 00:17:31.750 "trtype": "TCP", 00:17:31.750 "adrfam": "IPv4", 00:17:31.750 "traddr": "10.0.0.1", 00:17:31.750 "trsvcid": "56118" 00:17:31.750 }, 00:17:31.750 "auth": { 00:17:31.750 "state": "completed", 00:17:31.750 "digest": "sha256", 00:17:31.750 "dhgroup": "ffdhe8192" 00:17:31.750 } 00:17:31.750 } 00:17:31.750 ]' 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.750 14:22:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.008 14:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.574 14:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.140 00:17:33.140 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.140 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.140 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
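The trace above repeatedly verifies each connection the same way: it calls `nvmf_subsystem_get_qpairs`, then uses `jq` to assert that the first qpair's `auth.state` is `completed` and that `auth.digest` / `auth.dhgroup` match the parameters passed to `bdev_nvme_set_options`. As an illustrative sketch only (not part of the test suite), the same check can be expressed in Python, with the JSON shape and values copied from the qpair dump in the log:

```python
import json

# Qpair listing in the shape nvmf_subsystem_get_qpairs prints in the log
# above; the values are copied from the sha256/ffdhe8192 iteration.
qpairs_json = """
[
  {
    "cntlid": 45,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "56154"},
    "auth": {"state": "completed", "digest": "sha256",
             "dhgroup": "ffdhe8192"}
  }
]
"""

def auth_matches(qpairs, digest, dhgroup):
    """Mirror of the script's jq checks on the first qpair:
    authentication must have completed with the expected digest
    and DH group."""
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

qpairs = json.loads(qpairs_json)
print(auth_matches(qpairs, "sha256", "ffdhe8192"))  # True
print(auth_matches(qpairs, "sha384", "ffdhe8192"))  # False
```

This is just a restatement of the `jq -r '.[0].auth.digest'` / `[[ sha256 == \s\h\a\2\5\6 ]]` pattern visible in the trace; the function name `auth_matches` is invented here for illustration.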
00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.399 { 00:17:33.399 "cntlid": 45, 00:17:33.399 "qid": 0, 00:17:33.399 "state": "enabled", 00:17:33.399 "thread": "nvmf_tgt_poll_group_000", 00:17:33.399 "listen_address": { 00:17:33.399 "trtype": "TCP", 00:17:33.399 "adrfam": "IPv4", 00:17:33.399 "traddr": "10.0.0.2", 00:17:33.399 "trsvcid": "4420" 00:17:33.399 }, 00:17:33.399 "peer_address": { 00:17:33.399 "trtype": "TCP", 00:17:33.399 "adrfam": "IPv4", 00:17:33.399 "traddr": "10.0.0.1", 00:17:33.399 "trsvcid": "56154" 00:17:33.399 }, 00:17:33.399 "auth": { 00:17:33.399 "state": "completed", 00:17:33.399 "digest": "sha256", 00:17:33.399 "dhgroup": "ffdhe8192" 00:17:33.399 } 00:17:33.399 } 00:17:33.399 ]' 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:33.399 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.658 14:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.226 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:34.485 14:22:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.485 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.051 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.051 { 00:17:35.051 "cntlid": 47, 00:17:35.051 "qid": 0, 00:17:35.051 "state": "enabled", 00:17:35.051 "thread": "nvmf_tgt_poll_group_000", 00:17:35.051 "listen_address": { 00:17:35.051 "trtype": "TCP", 00:17:35.051 "adrfam": "IPv4", 00:17:35.051 "traddr": "10.0.0.2", 00:17:35.051 "trsvcid": "4420" 00:17:35.051 }, 00:17:35.051 "peer_address": { 00:17:35.051 "trtype": "TCP", 00:17:35.051 "adrfam": "IPv4", 00:17:35.051 "traddr": "10.0.0.1", 00:17:35.051 "trsvcid": "36478" 00:17:35.051 }, 00:17:35.051 "auth": { 00:17:35.051 "state": "completed", 00:17:35.051 "digest": "sha256", 00:17:35.051 "dhgroup": "ffdhe8192" 00:17:35.051 } 00:17:35.051 } 00:17:35.051 ]' 00:17:35.051 14:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.051 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.051 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.309 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.310 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.310 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.310 14:22:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.310 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.310 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.877 14:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.135 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.393 00:17:36.393 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.393 14:22:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.393 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.652 { 00:17:36.652 "cntlid": 49, 00:17:36.652 "qid": 0, 00:17:36.652 "state": "enabled", 00:17:36.652 "thread": "nvmf_tgt_poll_group_000", 00:17:36.652 "listen_address": { 00:17:36.652 "trtype": "TCP", 00:17:36.652 "adrfam": "IPv4", 00:17:36.652 "traddr": "10.0.0.2", 00:17:36.652 "trsvcid": "4420" 00:17:36.652 }, 00:17:36.652 "peer_address": { 00:17:36.652 "trtype": "TCP", 00:17:36.652 "adrfam": "IPv4", 00:17:36.652 "traddr": "10.0.0.1", 00:17:36.652 "trsvcid": "36510" 00:17:36.652 }, 00:17:36.652 "auth": { 00:17:36.652 "state": "completed", 00:17:36.652 "digest": "sha384", 00:17:36.652 "dhgroup": "null" 00:17:36.652 } 00:17:36.652 } 00:17:36.652 ]' 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.652 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.911 14:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.546 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.805 00:17:37.805 
14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.805 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.805 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.064 { 00:17:38.064 "cntlid": 51, 00:17:38.064 "qid": 0, 00:17:38.064 "state": "enabled", 00:17:38.064 "thread": "nvmf_tgt_poll_group_000", 00:17:38.064 "listen_address": { 00:17:38.064 "trtype": "TCP", 00:17:38.064 "adrfam": "IPv4", 00:17:38.064 "traddr": "10.0.0.2", 00:17:38.064 "trsvcid": "4420" 00:17:38.064 }, 00:17:38.064 "peer_address": { 00:17:38.064 "trtype": "TCP", 00:17:38.064 "adrfam": "IPv4", 00:17:38.064 "traddr": "10.0.0.1", 00:17:38.064 "trsvcid": "36544" 00:17:38.064 }, 00:17:38.064 "auth": { 00:17:38.064 "state": "completed", 00:17:38.064 "digest": "sha384", 00:17:38.064 "dhgroup": "null" 00:17:38.064 } 00:17:38.064 } 00:17:38.064 ]' 00:17:38.064 14:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.064 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.064 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.064 14:22:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.064 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.324 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.324 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.324 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.324 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.892 14:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.892 14:22:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.151 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:39.410 00:17:39.410 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.410 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.410 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.670 { 00:17:39.670 "cntlid": 53, 00:17:39.670 "qid": 0, 00:17:39.670 "state": "enabled", 00:17:39.670 "thread": "nvmf_tgt_poll_group_000", 00:17:39.670 "listen_address": { 00:17:39.670 "trtype": "TCP", 00:17:39.670 "adrfam": "IPv4", 00:17:39.670 "traddr": "10.0.0.2", 00:17:39.670 "trsvcid": "4420" 00:17:39.670 }, 00:17:39.670 "peer_address": { 00:17:39.670 "trtype": "TCP", 00:17:39.670 "adrfam": "IPv4", 00:17:39.670 "traddr": "10.0.0.1", 00:17:39.670 "trsvcid": "36572" 00:17:39.670 }, 00:17:39.670 "auth": { 00:17:39.670 "state": "completed", 00:17:39.670 "digest": "sha384", 00:17:39.670 "dhgroup": "null" 00:17:39.670 } 00:17:39.670 } 00:17:39.670 ]' 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.670 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.929 14:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.496 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:17:40.754 00:17:40.754 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.754 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.754 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.014 { 00:17:41.014 "cntlid": 55, 00:17:41.014 "qid": 0, 00:17:41.014 "state": "enabled", 00:17:41.014 "thread": "nvmf_tgt_poll_group_000", 00:17:41.014 "listen_address": { 00:17:41.014 "trtype": "TCP", 00:17:41.014 "adrfam": "IPv4", 00:17:41.014 "traddr": "10.0.0.2", 00:17:41.014 "trsvcid": "4420" 00:17:41.014 }, 00:17:41.014 "peer_address": { 00:17:41.014 "trtype": "TCP", 00:17:41.014 "adrfam": "IPv4", 00:17:41.014 "traddr": "10.0.0.1", 00:17:41.014 "trsvcid": "36598" 00:17:41.014 }, 00:17:41.014 "auth": { 00:17:41.014 "state": "completed", 00:17:41.014 "digest": "sha384", 00:17:41.014 "dhgroup": "null" 00:17:41.014 } 00:17:41.014 } 00:17:41.014 ]' 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.014 
14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.014 14:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.272 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.272 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.272 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.272 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.839 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.098 14:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.358 00:17:42.358 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.358 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.358 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.358 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.617 { 00:17:42.617 "cntlid": 57, 00:17:42.617 "qid": 0, 00:17:42.617 "state": "enabled", 00:17:42.617 "thread": "nvmf_tgt_poll_group_000", 00:17:42.617 "listen_address": { 00:17:42.617 "trtype": "TCP", 00:17:42.617 "adrfam": "IPv4", 00:17:42.617 "traddr": "10.0.0.2", 00:17:42.617 "trsvcid": "4420" 00:17:42.617 }, 00:17:42.617 "peer_address": { 00:17:42.617 "trtype": "TCP", 00:17:42.617 "adrfam": "IPv4", 00:17:42.617 "traddr": "10.0.0.1", 00:17:42.617 "trsvcid": "36636" 00:17:42.617 }, 00:17:42.617 "auth": { 00:17:42.617 "state": "completed", 00:17:42.617 "digest": "sha384", 00:17:42.617 "dhgroup": "ffdhe2048" 00:17:42.617 } 00:17:42.617 } 00:17:42.617 ]' 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.617 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.876 14:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.444 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.703 00:17:43.703 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.703 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.703 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.962 { 00:17:43.962 "cntlid": 59, 00:17:43.962 "qid": 0, 00:17:43.962 "state": "enabled", 00:17:43.962 "thread": "nvmf_tgt_poll_group_000", 00:17:43.962 "listen_address": { 00:17:43.962 "trtype": "TCP", 00:17:43.962 "adrfam": "IPv4", 00:17:43.962 "traddr": "10.0.0.2", 00:17:43.962 "trsvcid": "4420" 00:17:43.962 }, 00:17:43.962 "peer_address": { 00:17:43.962 "trtype": "TCP", 00:17:43.962 "adrfam": "IPv4", 00:17:43.962 "traddr": "10.0.0.1", 00:17:43.962 "trsvcid": "36666" 00:17:43.962 }, 00:17:43.962 "auth": { 00:17:43.962 "state": "completed", 00:17:43.962 "digest": "sha384", 00:17:43.962 "dhgroup": "ffdhe2048" 00:17:43.962 } 00:17:43.962 } 00:17:43.962 ]' 00:17:43.962 
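
Each loop iteration traced above follows the same pattern: `bdev_nvme_set_options` restricts the host to a single digest/dhgroup pair, `nvmf_subsystem_add_host` registers the DH-HMAC-CHAP keys on the subsystem, `bdev_nvme_attach_controller` performs the authenticated attach, and the qpair returned by `nvmf_subsystem_get_qpairs` is then checked field by field with `jq`. A minimal sketch of that verification step against a captured qpairs dump (sample values taken from the JSON printed just above; the variable names are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# Sample of the JSON that `rpc.py nvmf_subsystem_get_qpairs` returns,
# reduced to the fields the auth test actually inspects.
qpairs='[{"cntlid":59,"qid":0,"state":"enabled",
  "auth":{"state":"completed","digest":"sha384","dhgroup":"ffdhe2048"}}]'

# Pull out the three negotiated auth parameters, as auth.sh@46-48 does.
digest=$(echo "$qpairs"  | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs"   | jq -r '.[0].auth.state')

# The iteration passes only when all three match what was configured.
if [[ $digest == sha384 && $dhgroup == ffdhe2048 && $state == completed ]]; then
  echo "auth verified: $digest/$dhgroup"
else
  echo "auth mismatch: $digest/$dhgroup/$state"
fi
```

The `[[ x == \s\h\a\3\8\4 ]]` comparisons in the trace are the same checks; the backslashes are just xtrace's escaping of the pattern side of `[[ == ]]`.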
14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.962 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.221 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.221 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.221 14:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.221 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.788 14:22:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.788 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:45.046 14:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.304 00:17:45.304 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.304 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.304 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.562 { 00:17:45.562 "cntlid": 61, 00:17:45.562 "qid": 0, 00:17:45.562 "state": "enabled", 00:17:45.562 "thread": "nvmf_tgt_poll_group_000", 00:17:45.562 "listen_address": { 00:17:45.562 "trtype": "TCP", 00:17:45.562 "adrfam": "IPv4", 00:17:45.562 "traddr": "10.0.0.2", 00:17:45.562 "trsvcid": "4420" 00:17:45.562 }, 00:17:45.562 "peer_address": { 00:17:45.562 "trtype": "TCP", 00:17:45.562 "adrfam": "IPv4", 00:17:45.562 "traddr": "10.0.0.1", 00:17:45.562 "trsvcid": "43264" 00:17:45.562 }, 00:17:45.562 "auth": { 00:17:45.562 "state": "completed", 00:17:45.562 "digest": 
"sha384", 00:17:45.562 "dhgroup": "ffdhe2048" 00:17:45.562 } 00:17:45.562 } 00:17:45.562 ]' 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.562 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.819 14:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.385 14:22:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.385 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.642 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.642 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.901 { 00:17:46.901 "cntlid": 63, 00:17:46.901 "qid": 0, 00:17:46.901 "state": "enabled", 00:17:46.901 "thread": "nvmf_tgt_poll_group_000", 00:17:46.901 "listen_address": { 00:17:46.901 "trtype": "TCP", 00:17:46.901 "adrfam": "IPv4", 00:17:46.901 "traddr": "10.0.0.2", 00:17:46.901 "trsvcid": "4420" 00:17:46.901 }, 00:17:46.901 "peer_address": { 00:17:46.901 "trtype": "TCP", 00:17:46.901 "adrfam": "IPv4", 00:17:46.901 "traddr": "10.0.0.1", 00:17:46.901 "trsvcid": "43288" 00:17:46.901 }, 00:17:46.901 "auth": 
{ 00:17:46.901 "state": "completed", 00:17:46.901 "digest": "sha384", 00:17:46.901 "dhgroup": "ffdhe2048" 00:17:46.901 } 00:17:46.901 } 00:17:46.901 ]' 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.901 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.158 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.158 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.158 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.158 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.158 14:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.417 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.984 14:22:39 
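
The `--dhchap-secret`/`--dhchap-ctrl-secret` strings exchanged above use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hh>:<base64>:`, where `<hh>` identifies the hash used to transform the key (`00` = unhashed, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512) and, per the spec's representation, the base64 payload is the key material followed by a 4-byte CRC-32. A quick length sanity check on two secrets from this run (a sketch, assuming GNU coreutils `base64`/`wc`):

```shell
#!/usr/bin/env bash
# Decoded payload length = key length + 4-byte CRC-32.

# DHHC-1:00: secret from the log above -> 48-byte unhashed key
echo -n 'ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==' \
  | base64 -d | wc -c    # prints 52 (= 48 + 4)

# DHHC-1:01: secret from the log above -> 32-byte key (SHA-256-sized)
echo -n 'ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r' \
  | base64 -d | wc -c    # prints 36 (= 32 + 4)
```

This is why the `:03:` controller secrets in the log are the longest: a SHA-512-transformed key is 64 bytes before the CRC is appended.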
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.984 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.985 14:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.244 00:17:48.244 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.244 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.244 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.503 { 00:17:48.503 "cntlid": 65, 00:17:48.503 "qid": 0, 00:17:48.503 "state": "enabled", 00:17:48.503 "thread": "nvmf_tgt_poll_group_000", 00:17:48.503 "listen_address": { 00:17:48.503 "trtype": "TCP", 00:17:48.503 "adrfam": "IPv4", 00:17:48.503 "traddr": "10.0.0.2", 00:17:48.503 "trsvcid": "4420" 00:17:48.503 }, 00:17:48.503 "peer_address": { 00:17:48.503 "trtype": "TCP", 
00:17:48.503 "adrfam": "IPv4", 00:17:48.503 "traddr": "10.0.0.1", 00:17:48.503 "trsvcid": "43316" 00:17:48.503 }, 00:17:48.503 "auth": { 00:17:48.503 "state": "completed", 00:17:48.503 "digest": "sha384", 00:17:48.503 "dhgroup": "ffdhe3072" 00:17:48.503 } 00:17:48.503 } 00:17:48.503 ]' 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.503 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.761 14:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.328 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.586 14:22:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.586 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.844 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.844 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.103 { 00:17:50.103 "cntlid": 67, 00:17:50.103 "qid": 0, 00:17:50.103 "state": "enabled", 00:17:50.103 "thread": "nvmf_tgt_poll_group_000", 00:17:50.103 "listen_address": { 00:17:50.103 "trtype": "TCP", 00:17:50.103 "adrfam": 
"IPv4", 00:17:50.103 "traddr": "10.0.0.2", 00:17:50.103 "trsvcid": "4420" 00:17:50.103 }, 00:17:50.103 "peer_address": { 00:17:50.103 "trtype": "TCP", 00:17:50.103 "adrfam": "IPv4", 00:17:50.103 "traddr": "10.0.0.1", 00:17:50.103 "trsvcid": "43330" 00:17:50.103 }, 00:17:50.103 "auth": { 00:17:50.103 "state": "completed", 00:17:50.103 "digest": "sha384", 00:17:50.103 "dhgroup": "ffdhe3072" 00:17:50.103 } 00:17:50.103 } 00:17:50.103 ]' 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.103 14:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.362 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.929 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.930 14:22:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.930 14:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.189 00:17:51.189 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.189 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.189 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.446 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.447 { 00:17:51.447 "cntlid": 69, 00:17:51.447 "qid": 0, 00:17:51.447 "state": "enabled", 00:17:51.447 "thread": 
"nvmf_tgt_poll_group_000", 00:17:51.447 "listen_address": { 00:17:51.447 "trtype": "TCP", 00:17:51.447 "adrfam": "IPv4", 00:17:51.447 "traddr": "10.0.0.2", 00:17:51.447 "trsvcid": "4420" 00:17:51.447 }, 00:17:51.447 "peer_address": { 00:17:51.447 "trtype": "TCP", 00:17:51.447 "adrfam": "IPv4", 00:17:51.447 "traddr": "10.0.0.1", 00:17:51.447 "trsvcid": "43354" 00:17:51.447 }, 00:17:51.447 "auth": { 00:17:51.447 "state": "completed", 00:17:51.447 "digest": "sha384", 00:17:51.447 "dhgroup": "ffdhe3072" 00:17:51.447 } 00:17:51.447 } 00:17:51.447 ]' 00:17:51.447 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.447 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.447 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.447 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.447 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.705 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.705 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.705 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.705 14:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.271 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.531 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.790 00:17:52.790 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.790 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.790 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.049 { 00:17:53.049 "cntlid": 71, 00:17:53.049 "qid": 0, 00:17:53.049 "state": "enabled", 00:17:53.049 "thread": 
"nvmf_tgt_poll_group_000", 00:17:53.049 "listen_address": { 00:17:53.049 "trtype": "TCP", 00:17:53.049 "adrfam": "IPv4", 00:17:53.049 "traddr": "10.0.0.2", 00:17:53.049 "trsvcid": "4420" 00:17:53.049 }, 00:17:53.049 "peer_address": { 00:17:53.049 "trtype": "TCP", 00:17:53.049 "adrfam": "IPv4", 00:17:53.049 "traddr": "10.0.0.1", 00:17:53.049 "trsvcid": "43384" 00:17:53.049 }, 00:17:53.049 "auth": { 00:17:53.049 "state": "completed", 00:17:53.049 "digest": "sha384", 00:17:53.049 "dhgroup": "ffdhe3072" 00:17:53.049 } 00:17:53.049 } 00:17:53.049 ]' 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.049 14:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.308 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.887 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.887 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.145 14:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.405 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:54.405 { 00:17:54.405 "cntlid": 73, 00:17:54.405 "qid": 0, 00:17:54.405 "state": "enabled", 00:17:54.405 "thread": "nvmf_tgt_poll_group_000", 00:17:54.405 "listen_address": { 00:17:54.405 "trtype": "TCP", 00:17:54.405 "adrfam": "IPv4", 00:17:54.405 "traddr": "10.0.0.2", 00:17:54.405 "trsvcid": "4420" 00:17:54.405 }, 00:17:54.405 "peer_address": { 00:17:54.405 "trtype": "TCP", 00:17:54.405 "adrfam": "IPv4", 00:17:54.405 "traddr": "10.0.0.1", 00:17:54.405 "trsvcid": "56554" 00:17:54.405 }, 00:17:54.405 "auth": { 00:17:54.405 "state": "completed", 00:17:54.405 "digest": "sha384", 00:17:54.405 "dhgroup": "ffdhe4096" 00:17:54.405 } 00:17:54.405 } 00:17:54.405 ]' 00:17:54.405 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.664 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.923 14:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.491 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.749 00:17:55.749 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.749 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.749 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.010 14:22:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.010 { 00:17:56.010 "cntlid": 75, 00:17:56.010 "qid": 0, 00:17:56.010 "state": "enabled", 00:17:56.010 "thread": "nvmf_tgt_poll_group_000", 00:17:56.010 "listen_address": { 00:17:56.010 "trtype": "TCP", 00:17:56.010 "adrfam": "IPv4", 00:17:56.010 "traddr": "10.0.0.2", 00:17:56.010 "trsvcid": "4420" 00:17:56.010 }, 00:17:56.010 "peer_address": { 00:17:56.010 "trtype": "TCP", 00:17:56.010 "adrfam": "IPv4", 00:17:56.010 "traddr": "10.0.0.1", 00:17:56.010 "trsvcid": "56574" 00:17:56.010 }, 00:17:56.010 "auth": { 00:17:56.010 "state": "completed", 00:17:56.010 "digest": "sha384", 00:17:56.010 "dhgroup": "ffdhe4096" 00:17:56.010 } 00:17:56.010 } 00:17:56.010 ]' 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.010 14:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.010 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.297 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.297 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.297 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.297 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.297 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.866 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.867 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.867 14:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.126 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.385 00:17:57.385 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.385 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.385 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.644 14:22:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.644 { 00:17:57.644 "cntlid": 77, 00:17:57.644 "qid": 0, 00:17:57.644 "state": "enabled", 00:17:57.644 "thread": "nvmf_tgt_poll_group_000", 00:17:57.644 "listen_address": { 00:17:57.644 "trtype": "TCP", 00:17:57.644 "adrfam": "IPv4", 00:17:57.644 "traddr": "10.0.0.2", 00:17:57.644 "trsvcid": "4420" 00:17:57.644 }, 00:17:57.644 "peer_address": { 00:17:57.644 "trtype": "TCP", 00:17:57.644 "adrfam": "IPv4", 00:17:57.644 "traddr": "10.0.0.1", 00:17:57.644 "trsvcid": "56592" 00:17:57.644 }, 00:17:57.644 "auth": { 00:17:57.644 "state": "completed", 00:17:57.644 "digest": "sha384", 00:17:57.644 "dhgroup": "ffdhe4096" 00:17:57.644 } 00:17:57.644 } 00:17:57.644 ]' 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.644 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.903 14:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.470 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.729 14:22:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.729 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.988 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.989 14:22:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.989 { 00:17:58.989 "cntlid": 79, 00:17:58.989 "qid": 0, 00:17:58.989 "state": "enabled", 00:17:58.989 "thread": "nvmf_tgt_poll_group_000", 00:17:58.989 "listen_address": { 00:17:58.989 "trtype": "TCP", 00:17:58.989 "adrfam": "IPv4", 00:17:58.989 "traddr": "10.0.0.2", 00:17:58.989 "trsvcid": "4420" 00:17:58.989 }, 00:17:58.989 "peer_address": { 00:17:58.989 "trtype": "TCP", 00:17:58.989 "adrfam": "IPv4", 00:17:58.989 "traddr": "10.0.0.1", 00:17:58.989 "trsvcid": "56618" 00:17:58.989 }, 00:17:58.989 "auth": { 00:17:58.989 "state": "completed", 00:17:58.989 "digest": "sha384", 00:17:58.989 "dhgroup": "ffdhe4096" 00:17:58.989 } 00:17:58.989 } 00:17:58.989 ]' 00:17:58.989 14:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.248 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.506 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.074 14:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.074 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.641 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.641 { 00:18:00.641 "cntlid": 81, 00:18:00.641 "qid": 0, 00:18:00.641 "state": "enabled", 00:18:00.641 "thread": "nvmf_tgt_poll_group_000", 00:18:00.641 "listen_address": { 00:18:00.641 "trtype": "TCP", 00:18:00.641 "adrfam": "IPv4", 00:18:00.641 "traddr": "10.0.0.2", 00:18:00.641 "trsvcid": "4420" 00:18:00.641 }, 00:18:00.641 "peer_address": { 00:18:00.641 "trtype": "TCP", 00:18:00.641 "adrfam": "IPv4", 00:18:00.641 "traddr": "10.0.0.1", 00:18:00.641 "trsvcid": "56654" 00:18:00.641 }, 00:18:00.641 "auth": { 00:18:00.641 "state": "completed", 00:18:00.641 "digest": "sha384", 00:18:00.641 "dhgroup": "ffdhe6144" 00:18:00.641 } 00:18:00.641 } 00:18:00.641 ]' 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.641 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:00.900 14:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.469 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.727 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.985 00:18:02.243 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.243 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.243 14:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.243 { 00:18:02.243 "cntlid": 83, 00:18:02.243 "qid": 0, 00:18:02.243 "state": "enabled", 00:18:02.243 "thread": "nvmf_tgt_poll_group_000", 00:18:02.243 "listen_address": { 00:18:02.243 "trtype": "TCP", 00:18:02.243 "adrfam": "IPv4", 00:18:02.243 "traddr": "10.0.0.2", 00:18:02.243 "trsvcid": "4420" 00:18:02.243 }, 00:18:02.243 "peer_address": { 00:18:02.243 "trtype": "TCP", 00:18:02.243 "adrfam": "IPv4", 00:18:02.243 "traddr": "10.0.0.1", 00:18:02.243 "trsvcid": "56686" 00:18:02.243 }, 00:18:02.243 "auth": { 00:18:02.243 "state": "completed", 00:18:02.243 "digest": "sha384", 00:18:02.243 "dhgroup": "ffdhe6144" 00:18:02.243 } 00:18:02.243 } 00:18:02.243 ]' 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.243 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.502 14:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.070 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.071 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.071 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.329 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:03.329 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.330 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.588 00:18:03.588 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.588 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.588 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.847 { 00:18:03.847 "cntlid": 85, 00:18:03.847 "qid": 0, 00:18:03.847 "state": "enabled", 00:18:03.847 "thread": "nvmf_tgt_poll_group_000", 00:18:03.847 "listen_address": { 00:18:03.847 "trtype": "TCP", 00:18:03.847 "adrfam": "IPv4", 00:18:03.847 "traddr": "10.0.0.2", 00:18:03.847 "trsvcid": "4420" 00:18:03.847 }, 00:18:03.847 "peer_address": { 00:18:03.847 "trtype": "TCP", 00:18:03.847 "adrfam": "IPv4", 00:18:03.847 "traddr": "10.0.0.1", 00:18:03.847 "trsvcid": "56710" 00:18:03.847 }, 00:18:03.847 "auth": { 00:18:03.847 "state": "completed", 00:18:03.847 "digest": "sha384", 00:18:03.847 "dhgroup": "ffdhe6144" 00:18:03.847 } 00:18:03.847 } 00:18:03.847 ]' 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.847 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.106 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.106 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:04.106 14:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.106 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.673 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:04.932 14:22:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.932 14:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.191 00:18:05.191 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.191 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.191 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:05.450 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.450 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.450 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.450 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.451 { 00:18:05.451 "cntlid": 87, 00:18:05.451 "qid": 0, 00:18:05.451 "state": "enabled", 00:18:05.451 "thread": "nvmf_tgt_poll_group_000", 00:18:05.451 "listen_address": { 00:18:05.451 "trtype": "TCP", 00:18:05.451 "adrfam": "IPv4", 00:18:05.451 "traddr": "10.0.0.2", 00:18:05.451 "trsvcid": "4420" 00:18:05.451 }, 00:18:05.451 "peer_address": { 00:18:05.451 "trtype": "TCP", 00:18:05.451 "adrfam": "IPv4", 00:18:05.451 "traddr": "10.0.0.1", 00:18:05.451 "trsvcid": "48200" 00:18:05.451 }, 00:18:05.451 "auth": { 00:18:05.451 "state": "completed", 00:18:05.451 "digest": "sha384", 00:18:05.451 "dhgroup": "ffdhe6144" 00:18:05.451 } 00:18:05.451 } 00:18:05.451 ]' 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.451 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.710 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.710 14:22:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.710 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.710 14:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.277 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.536 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.101 00:18:07.101 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.101 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.101 14:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.101 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.101 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.102 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.102 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.102 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.102 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.102 { 00:18:07.102 "cntlid": 89, 00:18:07.102 "qid": 0, 00:18:07.102 "state": "enabled", 00:18:07.102 "thread": "nvmf_tgt_poll_group_000", 00:18:07.102 "listen_address": { 00:18:07.102 "trtype": "TCP", 00:18:07.102 "adrfam": "IPv4", 00:18:07.102 "traddr": "10.0.0.2", 00:18:07.102 "trsvcid": "4420" 00:18:07.102 }, 00:18:07.102 "peer_address": { 00:18:07.102 "trtype": "TCP", 00:18:07.102 "adrfam": "IPv4", 00:18:07.102 "traddr": "10.0.0.1", 00:18:07.102 "trsvcid": "48218" 00:18:07.102 }, 00:18:07.102 "auth": { 00:18:07.102 "state": "completed", 00:18:07.102 "digest": "sha384", 00:18:07.102 "dhgroup": "ffdhe8192" 00:18:07.102 } 00:18:07.102 } 00:18:07.102 ]' 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.359 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.617 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:08.184 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.184 14:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.184 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.184 14:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.184 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.184 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.184 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.184 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.443 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.701 00:18:08.701 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # hostrpc bdev_nvme_get_controllers 00:18:08.701 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.701 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.960 { 00:18:08.960 "cntlid": 91, 00:18:08.960 "qid": 0, 00:18:08.960 "state": "enabled", 00:18:08.960 "thread": "nvmf_tgt_poll_group_000", 00:18:08.960 "listen_address": { 00:18:08.960 "trtype": "TCP", 00:18:08.960 "adrfam": "IPv4", 00:18:08.960 "traddr": "10.0.0.2", 00:18:08.960 "trsvcid": "4420" 00:18:08.960 }, 00:18:08.960 "peer_address": { 00:18:08.960 "trtype": "TCP", 00:18:08.960 "adrfam": "IPv4", 00:18:08.960 "traddr": "10.0.0.1", 00:18:08.960 "trsvcid": "48234" 00:18:08.960 }, 00:18:08.960 "auth": { 00:18:08.960 "state": "completed", 00:18:08.960 "digest": "sha384", 00:18:08.960 "dhgroup": "ffdhe8192" 00:18:08.960 } 00:18:08.960 } 00:18:08.960 ]' 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:18:08.960 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.219 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.219 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.219 14:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.219 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.786 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.045 14:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:10.612 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.612 { 00:18:10.612 "cntlid": 93, 00:18:10.612 "qid": 0, 00:18:10.612 "state": "enabled", 00:18:10.612 "thread": "nvmf_tgt_poll_group_000", 00:18:10.612 "listen_address": { 00:18:10.612 "trtype": "TCP", 00:18:10.612 "adrfam": "IPv4", 00:18:10.612 "traddr": "10.0.0.2", 00:18:10.612 "trsvcid": "4420" 00:18:10.612 }, 00:18:10.612 "peer_address": { 00:18:10.612 "trtype": "TCP", 00:18:10.612 "adrfam": "IPv4", 00:18:10.612 "traddr": "10.0.0.1", 00:18:10.612 "trsvcid": "48264" 00:18:10.612 }, 00:18:10.612 "auth": { 00:18:10.612 "state": "completed", 00:18:10.612 "digest": "sha384", 00:18:10.612 "dhgroup": "ffdhe8192" 00:18:10.612 } 00:18:10.612 } 00:18:10.612 ]' 00:18:10.612 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.871 
14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.871 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.137 14:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.707 14:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:12.275 00:18:12.275 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.275 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.275 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.535 { 00:18:12.535 "cntlid": 95, 00:18:12.535 "qid": 0, 00:18:12.535 "state": "enabled", 00:18:12.535 "thread": "nvmf_tgt_poll_group_000", 00:18:12.535 "listen_address": { 00:18:12.535 "trtype": "TCP", 00:18:12.535 "adrfam": "IPv4", 00:18:12.535 "traddr": "10.0.0.2", 00:18:12.535 "trsvcid": "4420" 00:18:12.535 }, 00:18:12.535 "peer_address": { 00:18:12.535 "trtype": "TCP", 00:18:12.535 "adrfam": "IPv4", 00:18:12.535 "traddr": "10.0.0.1", 00:18:12.535 "trsvcid": "48292" 00:18:12.535 }, 00:18:12.535 "auth": { 00:18:12.535 "state": "completed", 00:18:12.535 "digest": "sha384", 00:18:12.535 "dhgroup": "ffdhe8192" 00:18:12.535 } 00:18:12.535 } 00:18:12.535 ]' 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.535 
14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.535 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.794 14:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.362 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.620 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.620 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.878 { 00:18:13.878 "cntlid": 97, 00:18:13.878 "qid": 0, 00:18:13.878 "state": "enabled", 00:18:13.878 "thread": "nvmf_tgt_poll_group_000", 00:18:13.878 "listen_address": { 00:18:13.878 "trtype": "TCP", 00:18:13.878 "adrfam": "IPv4", 00:18:13.878 "traddr": "10.0.0.2", 00:18:13.878 "trsvcid": "4420" 00:18:13.878 }, 00:18:13.878 "peer_address": { 00:18:13.878 "trtype": "TCP", 00:18:13.878 "adrfam": "IPv4", 00:18:13.878 "traddr": "10.0.0.1", 00:18:13.878 "trsvcid": "48304" 00:18:13.878 }, 00:18:13.878 "auth": { 00:18:13.878 "state": "completed", 00:18:13.878 "digest": "sha512", 00:18:13.878 "dhgroup": "null" 00:18:13.878 } 00:18:13.878 } 00:18:13.878 ]' 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:13.878 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.137 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.137 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.137 14:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.137 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:14.704 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.962 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.963 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.963 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.963 14:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.963 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.963 14:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.293 00:18:15.293 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.293 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.293 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.293 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.553 { 00:18:15.553 "cntlid": 99, 00:18:15.553 "qid": 0, 00:18:15.553 "state": "enabled", 00:18:15.553 "thread": "nvmf_tgt_poll_group_000", 00:18:15.553 "listen_address": { 00:18:15.553 "trtype": "TCP", 00:18:15.553 "adrfam": "IPv4", 00:18:15.553 "traddr": "10.0.0.2", 00:18:15.553 "trsvcid": "4420" 00:18:15.553 }, 00:18:15.553 "peer_address": { 00:18:15.553 "trtype": "TCP", 00:18:15.553 "adrfam": "IPv4", 00:18:15.553 "traddr": "10.0.0.1", 00:18:15.553 "trsvcid": "59376" 00:18:15.553 }, 00:18:15.553 "auth": { 00:18:15.553 "state": "completed", 00:18:15.553 "digest": "sha512", 00:18:15.553 "dhgroup": "null" 00:18:15.553 } 00:18:15.553 } 00:18:15.553 ]' 00:18:15.553 
14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.553 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.813 14:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.382 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.382 14:23:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.641 00:18:16.641 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.641 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.641 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.901 { 00:18:16.901 "cntlid": 101, 00:18:16.901 "qid": 0, 00:18:16.901 "state": "enabled", 00:18:16.901 "thread": "nvmf_tgt_poll_group_000", 00:18:16.901 "listen_address": { 00:18:16.901 "trtype": "TCP", 00:18:16.901 "adrfam": "IPv4", 00:18:16.901 "traddr": "10.0.0.2", 00:18:16.901 "trsvcid": "4420" 00:18:16.901 }, 00:18:16.901 "peer_address": { 00:18:16.901 "trtype": "TCP", 00:18:16.901 "adrfam": "IPv4", 00:18:16.901 "traddr": "10.0.0.1", 00:18:16.901 "trsvcid": "59390" 00:18:16.901 }, 00:18:16.901 "auth": { 00:18:16.901 "state": "completed", 00:18:16.901 "digest": "sha512", 00:18:16.901 "dhgroup": "null" 
00:18:16.901 } 00:18:16.901 } 00:18:16.901 ]' 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.901 14:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.161 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.728 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.987 14:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.987 14:23:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.244 00:18:18.244 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.244 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.244 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.504 { 00:18:18.504 "cntlid": 103, 00:18:18.504 "qid": 0, 00:18:18.504 "state": "enabled", 00:18:18.504 "thread": "nvmf_tgt_poll_group_000", 00:18:18.504 "listen_address": { 00:18:18.504 "trtype": "TCP", 00:18:18.504 "adrfam": "IPv4", 00:18:18.504 "traddr": "10.0.0.2", 00:18:18.504 "trsvcid": "4420" 00:18:18.504 }, 00:18:18.504 "peer_address": { 00:18:18.504 "trtype": "TCP", 00:18:18.504 "adrfam": "IPv4", 00:18:18.504 "traddr": "10.0.0.1", 00:18:18.504 "trsvcid": "59424" 00:18:18.504 }, 00:18:18.504 "auth": { 00:18:18.504 "state": "completed", 00:18:18.504 "digest": "sha512", 00:18:18.504 "dhgroup": "null" 00:18:18.504 } 00:18:18.504 } 
00:18:18.504 ]' 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.504 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.763 14:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.330 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.588 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.588 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.846 { 00:18:19.846 "cntlid": 105, 00:18:19.846 "qid": 0, 00:18:19.846 "state": "enabled", 00:18:19.846 "thread": "nvmf_tgt_poll_group_000", 00:18:19.846 "listen_address": { 00:18:19.846 "trtype": "TCP", 00:18:19.846 "adrfam": "IPv4", 00:18:19.846 "traddr": "10.0.0.2", 00:18:19.846 "trsvcid": "4420" 00:18:19.846 }, 00:18:19.846 "peer_address": { 00:18:19.846 "trtype": "TCP", 00:18:19.846 "adrfam": "IPv4", 00:18:19.846 "traddr": "10.0.0.1", 00:18:19.846 "trsvcid": "59464" 00:18:19.846 }, 00:18:19.846 "auth": { 00:18:19.846 
"state": "completed", 00:18:19.846 "digest": "sha512", 00:18:19.846 "dhgroup": "ffdhe2048" 00:18:19.846 } 00:18:19.846 } 00:18:19.846 ]' 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.846 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.104 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.104 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.104 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.104 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.104 14:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.104 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.670 14:23:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.670 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.928 14:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.189 00:18:21.189 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.189 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.189 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.448 { 00:18:21.448 "cntlid": 107, 00:18:21.448 "qid": 0, 00:18:21.448 "state": "enabled", 00:18:21.448 "thread": "nvmf_tgt_poll_group_000", 00:18:21.448 "listen_address": { 00:18:21.448 "trtype": "TCP", 00:18:21.448 "adrfam": "IPv4", 00:18:21.448 "traddr": "10.0.0.2", 00:18:21.448 "trsvcid": "4420" 00:18:21.448 }, 00:18:21.448 "peer_address": { 00:18:21.448 "trtype": "TCP", 
00:18:21.448 "adrfam": "IPv4", 00:18:21.448 "traddr": "10.0.0.1", 00:18:21.448 "trsvcid": "59494" 00:18:21.448 }, 00:18:21.448 "auth": { 00:18:21.448 "state": "completed", 00:18:21.448 "digest": "sha512", 00:18:21.448 "dhgroup": "ffdhe2048" 00:18:21.448 } 00:18:21.448 } 00:18:21.448 ]' 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.448 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.707 14:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:22.275 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.535 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.795 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.795 { 00:18:22.795 "cntlid": 109, 00:18:22.795 "qid": 0, 00:18:22.795 "state": "enabled", 00:18:22.795 "thread": "nvmf_tgt_poll_group_000", 00:18:22.795 "listen_address": { 00:18:22.795 "trtype": "TCP", 00:18:22.795 "adrfam": "IPv4", 00:18:22.795 "traddr": "10.0.0.2", 00:18:22.795 "trsvcid": "4420" 
00:18:22.795 }, 00:18:22.795 "peer_address": { 00:18:22.795 "trtype": "TCP", 00:18:22.795 "adrfam": "IPv4", 00:18:22.795 "traddr": "10.0.0.1", 00:18:22.795 "trsvcid": "59522" 00:18:22.795 }, 00:18:22.795 "auth": { 00:18:22.795 "state": "completed", 00:18:22.795 "digest": "sha512", 00:18:22.795 "dhgroup": "ffdhe2048" 00:18:22.795 } 00:18:22.795 } 00:18:22.795 ]' 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.795 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.055 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.055 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.055 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.055 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.055 14:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.055 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.623 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.883 14:23:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.883 14:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.143 00:18:24.143 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.143 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.143 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.401 { 00:18:24.401 "cntlid": 111, 00:18:24.401 "qid": 0, 00:18:24.401 "state": "enabled", 00:18:24.401 "thread": "nvmf_tgt_poll_group_000", 00:18:24.401 "listen_address": { 00:18:24.401 "trtype": "TCP", 00:18:24.401 "adrfam": "IPv4", 00:18:24.401 "traddr": "10.0.0.2", 
00:18:24.401 "trsvcid": "4420" 00:18:24.401 }, 00:18:24.401 "peer_address": { 00:18:24.401 "trtype": "TCP", 00:18:24.401 "adrfam": "IPv4", 00:18:24.401 "traddr": "10.0.0.1", 00:18:24.401 "trsvcid": "59554" 00:18:24.401 }, 00:18:24.401 "auth": { 00:18:24.401 "state": "completed", 00:18:24.401 "digest": "sha512", 00:18:24.401 "dhgroup": "ffdhe2048" 00:18:24.401 } 00:18:24.401 } 00:18:24.401 ]' 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.401 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.402 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.402 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.660 14:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.228 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.488 14:23:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.488 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.747 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.747 { 00:18:25.747 "cntlid": 113, 00:18:25.747 "qid": 0, 00:18:25.747 "state": "enabled", 00:18:25.747 "thread": 
"nvmf_tgt_poll_group_000", 00:18:25.747 "listen_address": { 00:18:25.747 "trtype": "TCP", 00:18:25.747 "adrfam": "IPv4", 00:18:25.747 "traddr": "10.0.0.2", 00:18:25.747 "trsvcid": "4420" 00:18:25.747 }, 00:18:25.747 "peer_address": { 00:18:25.747 "trtype": "TCP", 00:18:25.747 "adrfam": "IPv4", 00:18:25.747 "traddr": "10.0.0.1", 00:18:25.747 "trsvcid": "58874" 00:18:25.747 }, 00:18:25.747 "auth": { 00:18:25.747 "state": "completed", 00:18:25.747 "digest": "sha512", 00:18:25.747 "dhgroup": "ffdhe3072" 00:18:25.747 } 00:18:25.747 } 00:18:25.747 ]' 00:18:25.747 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.007 14:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.266 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:26.835 14:23:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.835 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.836 14:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.095 00:18:27.095 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.095 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.095 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:27.355 { 00:18:27.355 "cntlid": 115, 00:18:27.355 "qid": 0, 00:18:27.355 "state": "enabled", 00:18:27.355 "thread": "nvmf_tgt_poll_group_000", 00:18:27.355 "listen_address": { 00:18:27.355 "trtype": "TCP", 00:18:27.355 "adrfam": "IPv4", 00:18:27.355 "traddr": "10.0.0.2", 00:18:27.355 "trsvcid": "4420" 00:18:27.355 }, 00:18:27.355 "peer_address": { 00:18:27.355 "trtype": "TCP", 00:18:27.355 "adrfam": "IPv4", 00:18:27.355 "traddr": "10.0.0.1", 00:18:27.355 "trsvcid": "58904" 00:18:27.355 }, 00:18:27.355 "auth": { 00:18:27.355 "state": "completed", 00:18:27.355 "digest": "sha512", 00:18:27.355 "dhgroup": "ffdhe3072" 00:18:27.355 } 00:18:27.355 } 00:18:27.355 ]' 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.355 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.614 14:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret 
DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.183 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.465 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.723 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.723 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.724 14:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.724 14:23:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.724 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.724 { 00:18:28.724 "cntlid": 117, 00:18:28.724 "qid": 0, 00:18:28.724 "state": "enabled", 00:18:28.724 "thread": "nvmf_tgt_poll_group_000", 00:18:28.724 "listen_address": { 00:18:28.724 "trtype": "TCP", 00:18:28.724 "adrfam": "IPv4", 00:18:28.724 "traddr": "10.0.0.2", 00:18:28.724 "trsvcid": "4420" 00:18:28.724 }, 00:18:28.724 "peer_address": { 00:18:28.724 "trtype": "TCP", 00:18:28.724 "adrfam": "IPv4", 00:18:28.724 "traddr": "10.0.0.1", 00:18:28.724 "trsvcid": "58930" 00:18:28.724 }, 00:18:28.724 "auth": { 00:18:28.724 "state": "completed", 00:18:28.724 "digest": "sha512", 00:18:28.724 "dhgroup": "ffdhe3072" 00:18:28.724 } 00:18:28.724 } 00:18:28.724 ]' 00:18:28.724 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.983 14:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.242 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.809 14:23:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.809 14:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.068 00:18:30.068 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.068 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.068 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.327 { 00:18:30.327 "cntlid": 119, 00:18:30.327 "qid": 0, 00:18:30.327 "state": "enabled", 00:18:30.327 "thread": "nvmf_tgt_poll_group_000", 00:18:30.327 "listen_address": { 00:18:30.327 "trtype": "TCP", 00:18:30.327 "adrfam": "IPv4", 00:18:30.327 "traddr": "10.0.0.2", 00:18:30.327 "trsvcid": "4420" 00:18:30.327 }, 00:18:30.327 "peer_address": { 00:18:30.327 "trtype": "TCP", 00:18:30.327 "adrfam": "IPv4", 00:18:30.327 "traddr": "10.0.0.1", 00:18:30.327 "trsvcid": "58956" 00:18:30.327 }, 00:18:30.327 "auth": { 00:18:30.327 "state": "completed", 00:18:30.327 "digest": "sha512", 00:18:30.327 "dhgroup": "ffdhe3072" 00:18:30.327 } 00:18:30.327 } 00:18:30.327 ]' 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.327 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.587 14:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.164 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.427 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.685 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.685 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.685 { 00:18:31.685 "cntlid": 121, 00:18:31.685 "qid": 0, 00:18:31.685 "state": "enabled", 00:18:31.685 "thread": "nvmf_tgt_poll_group_000", 00:18:31.685 "listen_address": { 00:18:31.685 "trtype": "TCP", 00:18:31.685 "adrfam": "IPv4", 00:18:31.685 "traddr": "10.0.0.2", 00:18:31.685 "trsvcid": "4420" 00:18:31.685 }, 00:18:31.685 "peer_address": { 00:18:31.685 "trtype": "TCP", 00:18:31.685 "adrfam": "IPv4", 00:18:31.685 "traddr": "10.0.0.1", 00:18:31.685 "trsvcid": "58988" 00:18:31.685 }, 00:18:31.685 "auth": { 00:18:31.685 "state": "completed", 00:18:31.685 "digest": "sha512", 00:18:31.685 "dhgroup": "ffdhe4096" 00:18:31.685 } 00:18:31.685 } 00:18:31.685 ]' 00:18:31.686 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.943 14:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.202 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:32.770 14:23:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.770 14:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.029 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.287 { 00:18:33.287 "cntlid": 123, 00:18:33.287 "qid": 0, 00:18:33.287 "state": "enabled", 00:18:33.287 "thread": "nvmf_tgt_poll_group_000", 00:18:33.287 "listen_address": { 00:18:33.287 "trtype": "TCP", 00:18:33.287 "adrfam": "IPv4", 00:18:33.287 "traddr": "10.0.0.2", 00:18:33.287 "trsvcid": "4420" 00:18:33.287 }, 00:18:33.287 "peer_address": { 00:18:33.287 "trtype": "TCP", 00:18:33.287 "adrfam": "IPv4", 00:18:33.287 "traddr": "10.0.0.1", 00:18:33.287 "trsvcid": "59010" 00:18:33.287 }, 00:18:33.287 "auth": { 00:18:33.287 "state": "completed", 00:18:33.287 "digest": "sha512", 00:18:33.287 "dhgroup": "ffdhe4096" 00:18:33.287 } 00:18:33.287 } 00:18:33.287 ]' 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.287 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.545 14:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:34.114 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.415 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.688 00:18:34.688 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.688 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.688 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.947 { 00:18:34.947 "cntlid": 125, 00:18:34.947 "qid": 0, 00:18:34.947 "state": "enabled", 00:18:34.947 "thread": "nvmf_tgt_poll_group_000", 00:18:34.947 "listen_address": { 00:18:34.947 "trtype": "TCP", 00:18:34.947 "adrfam": "IPv4", 00:18:34.947 "traddr": "10.0.0.2", 00:18:34.947 "trsvcid": "4420" 00:18:34.947 }, 00:18:34.947 "peer_address": { 00:18:34.947 "trtype": "TCP", 00:18:34.947 "adrfam": "IPv4", 00:18:34.947 "traddr": "10.0.0.1", 00:18:34.947 "trsvcid": "40292" 00:18:34.947 }, 00:18:34.947 "auth": { 00:18:34.947 "state": "completed", 00:18:34.947 "digest": "sha512", 00:18:34.947 "dhgroup": "ffdhe4096" 00:18:34.947 } 00:18:34.947 } 00:18:34.947 ]' 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.947 14:23:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.206 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.773 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
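Each cycle traced above (the `target/auth.sh@93`/`@94`/`@96` lines) follows the same shape: reconfigure the host with one digest/dhgroup pair via `bdev_nvme_set_options`, register the key on the subsystem with `nvmf_subsystem_add_host`, attach a controller over TCP, check the controller name and the negotiated auth parameters, then detach. A minimal sketch of that command sequence, reconstructed from the trace (the helper function and its name are ours, not part of SPDK; paths and NQNs are copied from the log — note that in this run key3 is registered without a controller key):

```python
# Illustrative reconstruction of one connect_authenticate iteration from
# target/auth.sh, as traced in this log. "rpc_cmd" lines go to the target
# RPC socket; the rest go to the host socket, mirroring the hostrpc helper.
RPC = "scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN = "nqn.2024-03.io.spdk:cnode0"
HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562"

def connect_authenticate_cmds(digest, dhgroup, keyid, have_ctrlr_key=True):
    """Return the ordered command strings for one (digest, dhgroup, key) case."""
    ckey = f" --dhchap-ctrlr-key ckey{keyid}" if have_ctrlr_key else ""
    return [
        f"{RPC} bdev_nvme_set_options --dhchap-digests {digest} --dhchap-dhgroups {dhgroup}",
        f"rpc_cmd nvmf_subsystem_add_host {SUBNQN} {HOSTNQN} --dhchap-key key{keyid}{ckey}",
        f"{RPC} bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 "
        f"-q {HOSTNQN} -n {SUBNQN} --dhchap-key key{keyid}{ckey}",
        f"{RPC} bdev_nvme_get_controllers",             # expect name == nvme0
        f"rpc_cmd nvmf_subsystem_get_qpairs {SUBNQN}",  # expect auth.state == completed
        f"{RPC} bdev_nvme_detach_controller nvme0",
    ]

# The key3 case visible above: no controller key is registered for it.
cmds = connect_authenticate_cmds("sha512", "ffdhe4096", 3, have_ctrlr_key=False)
```

The same six-step pattern repeats below for ffdhe6144 and ffdhe8192 across keys 0 through 3.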
00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.031 14:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.032 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.032 14:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.290 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.290 { 00:18:36.290 "cntlid": 127, 00:18:36.290 "qid": 0, 00:18:36.290 "state": "enabled", 00:18:36.290 "thread": "nvmf_tgt_poll_group_000", 00:18:36.290 "listen_address": { 00:18:36.290 "trtype": "TCP", 00:18:36.290 "adrfam": "IPv4", 00:18:36.290 "traddr": "10.0.0.2", 00:18:36.290 "trsvcid": "4420" 00:18:36.290 }, 00:18:36.290 "peer_address": { 00:18:36.290 "trtype": "TCP", 00:18:36.290 "adrfam": "IPv4", 00:18:36.290 "traddr": "10.0.0.1", 00:18:36.290 "trsvcid": "40308" 00:18:36.290 }, 00:18:36.290 "auth": { 00:18:36.290 "state": "completed", 00:18:36.290 "digest": "sha512", 00:18:36.290 "dhgroup": "ffdhe4096" 00:18:36.290 } 00:18:36.290 } 00:18:36.290 ]' 00:18:36.290 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.550 14:23:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.809 14:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.377 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.946 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.946 { 00:18:37.946 "cntlid": 129, 00:18:37.946 "qid": 0, 00:18:37.946 "state": "enabled", 00:18:37.946 "thread": "nvmf_tgt_poll_group_000", 00:18:37.946 "listen_address": { 00:18:37.946 "trtype": "TCP", 00:18:37.946 "adrfam": "IPv4", 00:18:37.946 "traddr": "10.0.0.2", 00:18:37.946 "trsvcid": "4420" 00:18:37.946 }, 00:18:37.946 "peer_address": { 00:18:37.946 "trtype": "TCP", 00:18:37.946 "adrfam": "IPv4", 00:18:37.946 "traddr": "10.0.0.1", 00:18:37.946 "trsvcid": "40334" 00:18:37.946 }, 00:18:37.946 "auth": { 00:18:37.946 "state": "completed", 00:18:37.946 "digest": "sha512", 00:18:37.946 "dhgroup": "ffdhe6144" 00:18:37.946 } 00:18:37.946 } 00:18:37.946 ]' 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.946 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.204 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.204 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.204 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.204 14:23:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.204 14:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.204 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:38.773 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.032 14:23:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.032 14:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.290 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.549 { 00:18:39.549 "cntlid": 131, 00:18:39.549 "qid": 0, 00:18:39.549 "state": "enabled", 00:18:39.549 "thread": "nvmf_tgt_poll_group_000", 00:18:39.549 "listen_address": { 00:18:39.549 "trtype": "TCP", 00:18:39.549 "adrfam": "IPv4", 00:18:39.549 "traddr": "10.0.0.2", 00:18:39.549 "trsvcid": "4420" 00:18:39.549 }, 00:18:39.549 "peer_address": { 00:18:39.549 "trtype": "TCP", 00:18:39.549 "adrfam": "IPv4", 00:18:39.549 "traddr": "10.0.0.1", 00:18:39.549 "trsvcid": "40364" 00:18:39.549 }, 00:18:39.549 "auth": { 00:18:39.549 "state": "completed", 00:18:39.549 "digest": "sha512", 00:18:39.549 "dhgroup": "ffdhe6144" 00:18:39.549 } 00:18:39.549 } 00:18:39.549 ]' 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.549 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.808 14:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.377 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.637 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.895 00:18:40.895 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
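The three `jq` probes in each cycle (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) assert that the qpair actually completed DH-HMAC-CHAP with the parameters under test. The equivalent check in plain Python, run against a trimmed copy of one `nvmf_subsystem_get_qpairs` payload captured in this log (the `check_auth` helper is ours, for illustration):

```python
import json

# Trimmed copy of a qpairs dump from this log; field names match the RPC output.
qpairs_json = """
[
  {
    "cntlid": 133,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "40394"},
    "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe6144"}
  }
]
"""

def check_auth(qpairs, digest, dhgroup):
    """True when the first qpair finished authentication with the expected parameters."""
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

ok = check_auth(json.loads(qpairs_json), "sha512", "ffdhe6144")
```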
00:18:40.895 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.895 14:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.153 { 00:18:41.153 "cntlid": 133, 00:18:41.153 "qid": 0, 00:18:41.153 "state": "enabled", 00:18:41.153 "thread": "nvmf_tgt_poll_group_000", 00:18:41.153 "listen_address": { 00:18:41.153 "trtype": "TCP", 00:18:41.153 "adrfam": "IPv4", 00:18:41.153 "traddr": "10.0.0.2", 00:18:41.153 "trsvcid": "4420" 00:18:41.153 }, 00:18:41.153 "peer_address": { 00:18:41.153 "trtype": "TCP", 00:18:41.153 "adrfam": "IPv4", 00:18:41.153 "traddr": "10.0.0.1", 00:18:41.153 "trsvcid": "40394" 00:18:41.153 }, 00:18:41.153 "auth": { 00:18:41.153 "state": "completed", 00:18:41.153 "digest": "sha512", 00:18:41.153 "dhgroup": "ffdhe6144" 00:18:41.153 } 00:18:41.153 } 00:18:41.153 ]' 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.153 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.153 14:23:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.412 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.412 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.412 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.412 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.980 14:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.238 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.497 00:18:42.497 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:18:42.497 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.497 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.757 { 00:18:42.757 "cntlid": 135, 00:18:42.757 "qid": 0, 00:18:42.757 "state": "enabled", 00:18:42.757 "thread": "nvmf_tgt_poll_group_000", 00:18:42.757 "listen_address": { 00:18:42.757 "trtype": "TCP", 00:18:42.757 "adrfam": "IPv4", 00:18:42.757 "traddr": "10.0.0.2", 00:18:42.757 "trsvcid": "4420" 00:18:42.757 }, 00:18:42.757 "peer_address": { 00:18:42.757 "trtype": "TCP", 00:18:42.757 "adrfam": "IPv4", 00:18:42.757 "traddr": "10.0.0.1", 00:18:42.757 "trsvcid": "40418" 00:18:42.757 }, 00:18:42.757 "auth": { 00:18:42.757 "state": "completed", 00:18:42.757 "digest": "sha512", 00:18:42.757 "dhgroup": "ffdhe6144" 00:18:42.757 } 00:18:42.757 } 00:18:42.757 ]' 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.757 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.016 14:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.583 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.583 14:23:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.841 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.842 14:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.408 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.408 14:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.666 { 00:18:44.666 "cntlid": 137, 00:18:44.666 "qid": 0, 00:18:44.666 "state": "enabled", 00:18:44.666 "thread": "nvmf_tgt_poll_group_000", 00:18:44.666 "listen_address": { 00:18:44.666 "trtype": "TCP", 00:18:44.666 "adrfam": "IPv4", 00:18:44.666 "traddr": "10.0.0.2", 00:18:44.666 "trsvcid": "4420" 00:18:44.666 }, 00:18:44.666 "peer_address": { 00:18:44.666 "trtype": "TCP", 00:18:44.666 "adrfam": "IPv4", 00:18:44.666 "traddr": "10.0.0.1", 00:18:44.666 "trsvcid": "40456" 00:18:44.666 }, 00:18:44.666 "auth": { 00:18:44.666 "state": "completed", 00:18:44.666 "digest": "sha512", 00:18:44.666 "dhgroup": "ffdhe8192" 00:18:44.666 } 00:18:44.666 } 00:18:44.666 ]' 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.666 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.924 14:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.491 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.057 00:18:46.057 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.057 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.057 14:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.314 { 00:18:46.314 "cntlid": 139, 00:18:46.314 "qid": 0, 00:18:46.314 "state": "enabled", 00:18:46.314 "thread": "nvmf_tgt_poll_group_000", 00:18:46.314 "listen_address": { 00:18:46.314 "trtype": "TCP", 00:18:46.314 "adrfam": "IPv4", 00:18:46.314 "traddr": "10.0.0.2", 00:18:46.314 "trsvcid": "4420" 00:18:46.314 }, 00:18:46.314 "peer_address": { 00:18:46.314 "trtype": "TCP", 00:18:46.314 "adrfam": "IPv4", 00:18:46.314 "traddr": "10.0.0.1", 00:18:46.314 "trsvcid": "39412" 00:18:46.314 }, 00:18:46.314 "auth": { 00:18:46.314 "state": "completed", 00:18:46.314 "digest": "sha512", 00:18:46.314 "dhgroup": "ffdhe8192" 00:18:46.314 } 00:18:46.314 } 00:18:46.314 ]' 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.314 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.572 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDNmYzJmMDdiNDAyMmY1YmUyOTdhNDNhNjVjZTBjYjEufC6r: --dhchap-ctrl-secret DHHC-1:02:YTQxNDNlNjljYzUyYzMzMDM1MDhiMzA1OGZjNDgxYWJmNTNlYjczZGFmZWFmNzJhNwn7kg==: 00:18:47.137 14:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.137 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.137 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.138 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.138 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.138 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:18:47.138 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.138 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.395 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:47.395 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.395 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.395 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.395 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.396 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.654 00:18:47.654 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.654 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.654 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.913 { 00:18:47.913 "cntlid": 141, 00:18:47.913 "qid": 0, 00:18:47.913 "state": "enabled", 00:18:47.913 "thread": "nvmf_tgt_poll_group_000", 00:18:47.913 "listen_address": { 00:18:47.913 "trtype": "TCP", 00:18:47.913 "adrfam": "IPv4", 00:18:47.913 "traddr": "10.0.0.2", 00:18:47.913 "trsvcid": "4420" 00:18:47.913 }, 00:18:47.913 "peer_address": { 00:18:47.913 "trtype": "TCP", 00:18:47.913 "adrfam": "IPv4", 00:18:47.913 "traddr": "10.0.0.1", 00:18:47.913 "trsvcid": "39440" 00:18:47.913 }, 00:18:47.913 "auth": { 00:18:47.913 "state": "completed", 00:18:47.913 "digest": "sha512", 00:18:47.913 "dhgroup": "ffdhe8192" 00:18:47.913 } 00:18:47.913 } 00:18:47.913 ]' 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.913 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.171 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.171 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.171 14:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.171 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWVjMjgzYjkyOTJhODRjNjAyMzViY2MzMDFkYjgwZGE4MGNhNGJkZWU2NzRmODg40JZnfg==: --dhchap-ctrl-secret DHHC-1:01:MjQ5OWFmNjczOGY2OThjMjRlM2NiMzIwZTExM2NjZjMkWRVR: 00:18:48.737 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.737 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.738 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.738 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.738 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.738 
14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.738 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.738 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.016 14:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.583 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.583 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.583 { 00:18:49.583 "cntlid": 143, 00:18:49.583 "qid": 0, 00:18:49.583 "state": "enabled", 00:18:49.583 "thread": "nvmf_tgt_poll_group_000", 00:18:49.583 "listen_address": { 00:18:49.583 "trtype": "TCP", 00:18:49.583 "adrfam": "IPv4", 00:18:49.583 "traddr": "10.0.0.2", 00:18:49.583 "trsvcid": "4420" 00:18:49.583 }, 00:18:49.583 "peer_address": { 00:18:49.583 "trtype": "TCP", 00:18:49.583 "adrfam": "IPv4", 00:18:49.583 "traddr": "10.0.0.1", 00:18:49.583 "trsvcid": "39462" 00:18:49.583 }, 00:18:49.583 "auth": { 00:18:49.583 "state": "completed", 00:18:49.583 "digest": "sha512", 00:18:49.583 "dhgroup": "ffdhe8192" 00:18:49.583 } 00:18:49.583 } 00:18:49.583 ]' 00:18:49.583 14:23:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.842 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.100 14:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.705 
14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.705 14:23:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.705 14:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.272 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.272 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.530 { 00:18:51.530 "cntlid": 145, 00:18:51.530 "qid": 0, 00:18:51.530 "state": "enabled", 00:18:51.530 "thread": "nvmf_tgt_poll_group_000", 00:18:51.530 "listen_address": { 00:18:51.530 "trtype": "TCP", 00:18:51.530 "adrfam": 
"IPv4", 00:18:51.530 "traddr": "10.0.0.2", 00:18:51.530 "trsvcid": "4420" 00:18:51.530 }, 00:18:51.530 "peer_address": { 00:18:51.530 "trtype": "TCP", 00:18:51.530 "adrfam": "IPv4", 00:18:51.530 "traddr": "10.0.0.1", 00:18:51.530 "trsvcid": "39490" 00:18:51.530 }, 00:18:51.530 "auth": { 00:18:51.530 "state": "completed", 00:18:51.530 "digest": "sha512", 00:18:51.530 "dhgroup": "ffdhe8192" 00:18:51.530 } 00:18:51.530 } 00:18:51.530 ]' 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.530 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.788 14:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjU0NjEwZGY0YjQyZTk2ZWI0Y2NjMGViOGQ2MDllMzdlNjNjNDE1Mjk3Yjg3OTI5IPkHsQ==: --dhchap-ctrl-secret DHHC-1:03:NzQ4NWE0YTBkYTczOTMxNDE5Y2I0NTg1NDQ3MzQ0NmI4ODFlZGVlNTQ4Y2M2ZmZlNDFkZDI5MmZiZmUyMWViMfWY+bo=: 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.354 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.354 14:23:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.354 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.613 request: 00:18:52.613 { 00:18:52.613 "name": "nvme0", 00:18:52.613 "trtype": "tcp", 00:18:52.613 "traddr": "10.0.0.2", 00:18:52.613 "adrfam": "ipv4", 00:18:52.613 "trsvcid": "4420", 00:18:52.613 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:52.613 "prchk_reftag": false, 00:18:52.613 "prchk_guard": false, 00:18:52.613 "hdgst": false, 00:18:52.613 "ddgst": false, 00:18:52.613 "dhchap_key": "key2", 00:18:52.613 "method": "bdev_nvme_attach_controller", 00:18:52.613 "req_id": 1 00:18:52.613 } 00:18:52.613 Got JSON-RPC error response 00:18:52.613 response: 00:18:52.613 { 00:18:52.613 "code": -5, 00:18:52.613 "message": "Input/output error" 00:18:52.613 } 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.613 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.872 
14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.872 14:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:53.132 request: 00:18:53.132 { 00:18:53.132 "name": "nvme0", 00:18:53.132 "trtype": "tcp", 00:18:53.132 "traddr": "10.0.0.2", 00:18:53.132 "adrfam": "ipv4", 00:18:53.132 "trsvcid": "4420", 00:18:53.132 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:53.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:53.132 "prchk_reftag": false, 00:18:53.132 "prchk_guard": false, 00:18:53.132 "hdgst": false, 00:18:53.132 "ddgst": false, 00:18:53.132 "dhchap_key": "key1", 00:18:53.132 "dhchap_ctrlr_key": "ckey2", 00:18:53.132 "method": "bdev_nvme_attach_controller", 00:18:53.132 "req_id": 1 00:18:53.132 } 00:18:53.132 Got JSON-RPC error response 00:18:53.132 response: 00:18:53.132 { 00:18:53.132 "code": -5, 00:18:53.132 "message": "Input/output error" 00:18:53.132 } 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.132 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.796 request: 00:18:53.796 { 00:18:53.796 "name": "nvme0", 00:18:53.796 "trtype": "tcp", 00:18:53.796 "traddr": "10.0.0.2", 00:18:53.796 "adrfam": "ipv4", 00:18:53.796 "trsvcid": "4420", 00:18:53.796 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:53.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:53.796 "prchk_reftag": false, 00:18:53.796 "prchk_guard": false, 00:18:53.796 "hdgst": false, 00:18:53.796 "ddgst": false, 00:18:53.796 "dhchap_key": "key1", 00:18:53.796 "dhchap_ctrlr_key": "ckey1", 00:18:53.796 "method": "bdev_nvme_attach_controller", 00:18:53.796 "req_id": 1 00:18:53.796 } 00:18:53.796 Got JSON-RPC error response 00:18:53.796 response: 00:18:53.796 { 00:18:53.796 "code": -5, 00:18:53.796 "message": "Input/output error" 00:18:53.796 } 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.796 14:23:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2541996 ']' 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541996' 00:18:53.796 killing process with pid 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2541996 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2562608 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2562608 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2562608 ']' 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.796 14:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2562608 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2562608 ']' 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.734 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.735 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:54.735 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.735 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.994 14:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.562 00:18:55.562 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.562 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.562 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.822 { 00:18:55.822 "cntlid": 1, 00:18:55.822 "qid": 0, 00:18:55.822 "state": "enabled", 00:18:55.822 "thread": "nvmf_tgt_poll_group_000", 00:18:55.822 "listen_address": { 00:18:55.822 "trtype": "TCP", 00:18:55.822 "adrfam": "IPv4", 00:18:55.822 "traddr": "10.0.0.2", 00:18:55.822 "trsvcid": "4420" 00:18:55.822 }, 00:18:55.822 "peer_address": { 00:18:55.822 "trtype": "TCP", 00:18:55.822 "adrfam": "IPv4", 00:18:55.822 "traddr": "10.0.0.1", 00:18:55.822 "trsvcid": 
"60104" 00:18:55.822 }, 00:18:55.822 "auth": { 00:18:55.822 "state": "completed", 00:18:55.822 "digest": "sha512", 00:18:55.822 "dhgroup": "ffdhe8192" 00:18:55.822 } 00:18:55.822 } 00:18:55.822 ]' 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.822 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.081 14:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YzE3Mjc0NDliMjI5MmJlMGY5YmQ5NDk3MzE0ZmVmNDFhNGMzNWVjNWEzZTNjZDMzODEwYTkxYmJhNGY2YWE0ZrO6ffA=: 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:56.649 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.909 request: 00:18:56.909 { 00:18:56.909 "name": "nvme0", 00:18:56.909 "trtype": "tcp", 00:18:56.909 "traddr": "10.0.0.2", 00:18:56.909 "adrfam": "ipv4", 00:18:56.909 "trsvcid": "4420", 00:18:56.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:56.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:56.909 "prchk_reftag": false, 00:18:56.909 "prchk_guard": false, 00:18:56.909 "hdgst": false, 00:18:56.909 "ddgst": false, 00:18:56.909 "dhchap_key": "key3", 00:18:56.909 "method": "bdev_nvme_attach_controller", 00:18:56.909 "req_id": 1 00:18:56.909 } 00:18:56.909 Got JSON-RPC error response 00:18:56.909 response: 00:18:56.909 { 00:18:56.909 "code": -5, 00:18:56.909 "message": "Input/output error" 00:18:56.909 } 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.909 14:23:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:56.909 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.169 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.169 
14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.429 request: 00:18:57.429 { 00:18:57.429 "name": "nvme0", 00:18:57.429 "trtype": "tcp", 00:18:57.429 "traddr": "10.0.0.2", 00:18:57.429 "adrfam": "ipv4", 00:18:57.429 "trsvcid": "4420", 00:18:57.429 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:57.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:57.429 "prchk_reftag": false, 00:18:57.429 "prchk_guard": false, 00:18:57.429 "hdgst": false, 00:18:57.429 "ddgst": false, 00:18:57.429 "dhchap_key": "key3", 00:18:57.429 "method": "bdev_nvme_attach_controller", 00:18:57.429 "req_id": 1 00:18:57.429 } 00:18:57.429 Got JSON-RPC error response 00:18:57.429 response: 00:18:57.429 { 00:18:57.429 "code": -5, 00:18:57.429 "message": "Input/output error" 00:18:57.429 } 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.429 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.688 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.689 request: 00:18:57.689 { 00:18:57.689 "name": "nvme0", 00:18:57.689 "trtype": "tcp", 00:18:57.689 "traddr": "10.0.0.2", 00:18:57.689 "adrfam": "ipv4", 00:18:57.689 "trsvcid": "4420", 00:18:57.689 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:57.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:57.689 "prchk_reftag": false, 00:18:57.689 "prchk_guard": false, 00:18:57.689 "hdgst": false, 00:18:57.689 "ddgst": false, 00:18:57.689 "dhchap_key": "key0", 00:18:57.689 "dhchap_ctrlr_key": "key1", 00:18:57.689 "method": "bdev_nvme_attach_controller", 00:18:57.689 "req_id": 1 00:18:57.689 } 00:18:57.689 Got JSON-RPC error response 00:18:57.689 response: 00:18:57.689 { 
00:18:57.689 "code": -5,
00:18:57.689 "message": "Input/output error"
00:18:57.689 }
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:18:57.689 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:18:57.948
00:18:57.948 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers
00:18:57.948 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name'
00:18:57.948 14:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.206 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.206 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:58.206 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2542030
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2542030 ']'
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2542030
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2542030
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2542030'
killing process with pid 2542030
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2542030
00:18:58.465 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2542030
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:58.724 rmmod nvme_tcp
00:18:58.724 rmmod nvme_fabrics
00:18:58.724 rmmod nvme_keyring
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2562608 ']'
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2562608
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2562608 ']'
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2562608
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2562608
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2562608'
killing process with pid 2562608
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2562608
00:18:58.724 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2562608
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:58.984 14:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:01.519 14:23:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:01.519 14:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fM8 /tmp/spdk.key-sha256.BqE /tmp/spdk.key-sha384.yIR /tmp/spdk.key-sha512.bPc /tmp/spdk.key-sha512.xpf /tmp/spdk.key-sha384.u7L /tmp/spdk.key-sha256.BfD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:19:01.519
00:19:01.519 real 2m10.937s
00:19:01.519 user 5m0.558s
00:19:01.519 sys 0m20.747s
00:19:01.519 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:01.519 14:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.519 ************************************
00:19:01.519 END TEST nvmf_auth_target
00:19:01.519 ************************************
00:19:01.519 14:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:01.519 14:23:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']'
00:19:01.519 14:23:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:01.519 14:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:19:01.519 14:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:01.519 14:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:01.519 ************************************
00:19:01.519 START TEST nvmf_bdevio_no_huge
00:19:01.519 ************************************
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:01.519 * Looking for test storage...
00:19:01.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no
00:19:01.519 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable
00:19:01.520 14:23:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=()
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:06.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:06.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms
00:19:06.792
00:19:06.792 --- 10.0.0.2 ping statistics ---
00:19:06.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:06.792 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:06.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:06.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:19:06.792
00:19:06.792 --- 10.0.0.1 ping statistics ---
00:19:06.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:06.792 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:06.792 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2566868
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2566868
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2566868 ']'
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:06.793 14:23:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:06.793 [2024-07-12 14:23:58.603743] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:06.793 [2024-07-12 14:23:58.603787] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
[2024-07-12 14:23:58.667301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-12 14:23:58.752266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-12 14:23:58.752304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-12 14:23:58.752311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-12 14:23:58.752318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-12 14:23:58.752323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:06.793 [2024-07-12 14:23:58.752439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[2024-07-12 14:23:58.752551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
[2024-07-12 14:23:58.752657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-12 14:23:58.752658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 [2024-07-12 14:23:59.460062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 Malloc0
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:07.732 [2024-07-12 14:23:59.496318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=()
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:07.732 {
00:19:07.732 "params": {
00:19:07.732 "name": "Nvme$subsystem",
00:19:07.732 "trtype": "$TEST_TRANSPORT",
00:19:07.732 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:07.732 "adrfam": "ipv4",
00:19:07.732 "trsvcid": "$NVMF_PORT",
00:19:07.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:07.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:07.732 "hdgst": ${hdgst:-false},
00:19:07.732 "ddgst": ${ddgst:-false}
00:19:07.732 },
00:19:07.732 "method": "bdev_nvme_attach_controller"
00:19:07.732 }
00:19:07.732 EOF
00:19:07.732 )")
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq .
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:19:07.732 14:23:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:07.732 "params": {
00:19:07.732 "name": "Nvme1",
00:19:07.732 "trtype": "tcp",
00:19:07.732 "traddr": "10.0.0.2",
00:19:07.732 "adrfam": "ipv4",
00:19:07.732 "trsvcid": "4420",
00:19:07.732 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:07.732 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:07.732 "hdgst": false,
00:19:07.732 "ddgst": false
00:19:07.732 },
00:19:07.732 "method": "bdev_nvme_attach_controller"
00:19:07.732 }'
00:19:07.732 [2024-07-12 14:23:59.543367] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:07.732 [2024-07-12 14:23:59.543415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2567113 ] 00:19:07.732 [2024-07-12 14:23:59.601164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.732 [2024-07-12 14:23:59.687265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.732 [2024-07-12 14:23:59.687358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.732 [2024-07-12 14:23:59.687359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.300 I/O targets: 00:19:08.300 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:08.300 00:19:08.300 00:19:08.300 CUnit - A unit testing framework for C - Version 2.1-3 00:19:08.300 http://cunit.sourceforge.net/ 00:19:08.300 00:19:08.300 00:19:08.300 Suite: bdevio tests on: Nvme1n1 00:19:08.300 Test: blockdev write read block ...passed 00:19:08.300 Test: blockdev write zeroes read block ...passed 00:19:08.300 Test: blockdev write zeroes read no split ...passed 00:19:08.300 Test: blockdev write zeroes read split ...passed 00:19:08.300 Test: blockdev write zeroes read split partial ...passed 00:19:08.300 Test: blockdev reset ...[2024-07-12 14:24:00.192753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.300 [2024-07-12 14:24:00.192822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1300 (9): Bad file descriptor 00:19:08.300 [2024-07-12 14:24:00.212636] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:08.300 passed 00:19:08.300 Test: blockdev write read 8 blocks ...passed 00:19:08.300 Test: blockdev write read size > 128k ...passed 00:19:08.300 Test: blockdev write read invalid size ...passed 00:19:08.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:08.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:08.300 Test: blockdev write read max offset ...passed 00:19:08.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:08.559 Test: blockdev writev readv 8 blocks ...passed 00:19:08.559 Test: blockdev writev readv 30 x 1block ...passed 00:19:08.559 Test: blockdev writev readv block ...passed 00:19:08.559 Test: blockdev writev readv size > 128k ...passed 00:19:08.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:08.559 Test: blockdev comparev and writev ...[2024-07-12 14:24:00.424323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.424369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.424617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.424639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.424885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.424907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.424915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.425162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.425172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.425184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.559 [2024-07-12 14:24:00.425192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:08.559 passed 00:19:08.559 Test: blockdev nvme passthru rw ...passed 00:19:08.559 Test: blockdev nvme passthru vendor specific ...[2024-07-12 14:24:00.506664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.559 [2024-07-12 14:24:00.506681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.506789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.559 [2024-07-12 14:24:00.506799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.506902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.559 [2024-07-12 14:24:00.506911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:08.559 [2024-07-12 14:24:00.507021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.559 [2024-07-12 14:24:00.507032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:08.559 passed 00:19:08.559 Test: blockdev nvme admin passthru ...passed 00:19:08.559 Test: blockdev copy ...passed 00:19:08.559 00:19:08.559 Run Summary: Type Total Ran Passed Failed Inactive 00:19:08.559 suites 1 1 n/a 0 0 00:19:08.559 tests 23 23 23 0 0 00:19:08.559 asserts 152 152 152 0 n/a 00:19:08.559 00:19:08.559 Elapsed time = 1.064 seconds 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.128 rmmod nvme_tcp 00:19:09.128 rmmod nvme_fabrics 00:19:09.128 rmmod nvme_keyring 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2566868 ']' 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2566868 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2566868 ']' 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2566868 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2566868 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2566868' 00:19:09.128 killing process with pid 2566868 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2566868 00:19:09.128 14:24:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2566868 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.388 14:24:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.923 14:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.923 00:19:11.923 real 0m10.282s 00:19:11.923 user 0m13.875s 00:19:11.923 sys 0m4.884s 00:19:11.923 14:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.923 14:24:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 ************************************ 00:19:11.923 END TEST nvmf_bdevio_no_huge 00:19:11.923 ************************************ 00:19:11.923 14:24:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:11.923 14:24:03 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:11.923 14:24:03 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:11.923 14:24:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.923 14:24:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:11.923 ************************************ 00:19:11.923 START TEST nvmf_tls 00:19:11.923 ************************************ 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:11.923 * Looking for test storage... 00:19:11.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.923 14:24:03 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.924 14:24:03 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.924 14:24:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:17.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:17.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.249 14:24:08 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:17.249 Found net devices under 0000:86:00.0: cvl_0_0 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:17.249 Found net devices under 0000:86:00.1: cvl_0_1 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.249 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:19:17.250 00:19:17.250 --- 10.0.0.2 ping statistics --- 00:19:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.250 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:17.250 00:19:17.250 --- 10.0.0.1 ping statistics --- 00:19:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.250 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2570960 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2570960 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2570960 ']' 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.250 14:24:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.250 [2024-07-12 14:24:08.826395] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:19:17.250 [2024-07-12 14:24:08.826445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.250 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.250 [2024-07-12 14:24:08.885843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.250 [2024-07-12 14:24:08.966843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.250 [2024-07-12 14:24:08.966876] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.250 [2024-07-12 14:24:08.966883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.250 [2024-07-12 14:24:08.966889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.250 [2024-07-12 14:24:08.966894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:17.250 [2024-07-12 14:24:08.966910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']'
00:19:17.823 14:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:19:17.823 true
00:19:18.082 14:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:18.082 14:24:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version
00:19:18.082 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0
00:19:18.082 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]]
00:19:18.082 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:18.341 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:18.341 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version
00:19:18.341 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13
00:19:18.341 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]]
00:19:18.341 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:19:18.600 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:18.600 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]]
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]]
00:19:18.859 14:24:10 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:19:19.118 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:19.118 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:19:19.377 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:19:19.377 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:19:19.377 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:19:19.377 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:19:19.377 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:19:19.635 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RgTrv6rszu
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.YD4t98Nxko
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RgTrv6rszu
00:19:19.636 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YD4t98Nxko
00:19:19.894 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:19.894 14:24:11 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:19:20.153 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RgTrv6rszu
00:19:20.153 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RgTrv6rszu
00:19:20.153 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:20.412 [2024-07-12 14:24:12.220607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:20.412 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:19:20.412 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:19:20.670 [2024-07-12 14:24:12.553446] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:19:20.670 [2024-07-12 14:24:12.553631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:20.670 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:19:20.928 malloc0
00:19:20.928 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:19:20.928 14:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RgTrv6rszu
00:19:21.185 [2024-07-12 14:24:13.070970] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:19:21.185 14:24:13 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RgTrv6rszu
00:19:21.185 EAL: No free 2048 kB hugepages reported on node 1
00:19:33.391 Initializing NVMe Controllers
00:19:33.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:33.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:33.391 Initialization complete. Launching workers.
00:19:33.391 ========================================================
00:19:33.391 Latency(us)
00:19:33.391 Device Information : IOPS MiB/s Average min max
00:19:33.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16483.94 64.39 3883.01 839.89 5139.71
00:19:33.391 ========================================================
00:19:33.391 Total : 16483.94 64.39 3883.01 839.89 5139.71
00:19:33.391
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RgTrv6rszu
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RgTrv6rszu'
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2573673
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2573673 /var/tmp/bdevperf.sock
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2573673 ']'
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:33.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:33.391 14:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:33.391 [2024-07-12 14:24:23.235172] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:33.391 [2024-07-12 14:24:23.235219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573673 ]
00:19:33.391 EAL: No free 2048 kB hugepages reported on node 1
00:19:33.391 [2024-07-12 14:24:23.284989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:33.391 [2024-07-12 14:24:23.363773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:33.391 14:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:33.391 14:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:33.391 14:24:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RgTrv6rszu
00:19:33.391 [2024-07-12 14:24:24.198330] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:33.391 [2024-07-12 14:24:24.198403] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:33.391 TLSTESTn1
00:19:33.391 14:24:24 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:33.391 Running I/O for 10 seconds...
00:19:43.366
00:19:43.366 Latency(us)
00:19:43.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.366 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:43.366 Verification LBA range: start 0x0 length 0x2000
00:19:43.366 TLSTESTn1 : 10.02 3675.63 14.36 0.00 0.00 34773.43 4587.52 54024.46
00:19:43.366 ===================================================================================================================
00:19:43.366 Total : 3675.63 14.36 0.00 0.00 34773.43 4587.52 54024.46
00:19:43.366 0
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2573673
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2573673 ']'
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2573673
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2573673
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2573673'
00:19:43.366 killing process with pid 2573673
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2573673
00:19:43.366 Received shutdown signal, test time was about 10.000000 seconds
00:19:43.366
00:19:43.366 Latency(us)
00:19:43.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.366 ===================================================================================================================
00:19:43.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:43.366 [2024-07-12 14:24:34.487206] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2573673
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YD4t98Nxko
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YD4t98Nxko
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YD4t98Nxko
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:43.366 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YD4t98Nxko'
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2575560
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2575560 /var/tmp/bdevperf.sock
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2575560 ']'
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:43.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:43.367 14:24:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:43.367 [2024-07-12 14:24:34.716350] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:43.367 [2024-07-12 14:24:34.716402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575560 ]
00:19:43.367 EAL: No free 2048 kB hugepages reported on node 1
00:19:43.367 [2024-07-12 14:24:34.764832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:43.367 [2024-07-12 14:24:34.836371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:43.625 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:43.625 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:43.626 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YD4t98Nxko
00:19:43.885 [2024-07-12 14:24:35.686787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:43.885 [2024-07-12 14:24:35.686855] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:43.885 [2024-07-12 14:24:35.695409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:43.885 [2024-07-12 14:24:35.695980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6be570 (107): Transport endpoint is not connected
00:19:43.885 [2024-07-12 14:24:35.696973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6be570 (9): Bad file descriptor
00:19:43.885 [2024-07-12 14:24:35.697978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:43.885 [2024-07-12 14:24:35.697988] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:43.885 [2024-07-12 14:24:35.697998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:43.885 request:
00:19:43.885 {
00:19:43.885 "name": "TLSTEST",
00:19:43.885 "trtype": "tcp",
00:19:43.885 "traddr": "10.0.0.2",
00:19:43.885 "adrfam": "ipv4",
00:19:43.885 "trsvcid": "4420",
00:19:43.885 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:43.885 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:43.885 "prchk_reftag": false,
00:19:43.885 "prchk_guard": false,
00:19:43.885 "hdgst": false,
00:19:43.885 "ddgst": false,
00:19:43.885 "psk": "/tmp/tmp.YD4t98Nxko",
00:19:43.885 "method": "bdev_nvme_attach_controller",
00:19:43.885 "req_id": 1
00:19:43.885 }
00:19:43.885 Got JSON-RPC error response
00:19:43.885 response:
00:19:43.885 {
00:19:43.885 "code": -5,
00:19:43.885 "message": "Input/output error"
00:19:43.885 }
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2575560
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2575560 ']'
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2575560
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2575560
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575560'
00:19:43.885 killing process with pid 2575560
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2575560
00:19:43.885 Received shutdown signal, test time was about 10.000000 seconds
00:19:43.885
00:19:43.885 Latency(us)
00:19:43.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.885 ===================================================================================================================
00:19:43.885 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:43.885 [2024-07-12 14:24:35.757483] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:43.885 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2575560
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RgTrv6rszu
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RgTrv6rszu
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RgTrv6rszu
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RgTrv6rszu'
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2575723
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2575723 /var/tmp/bdevperf.sock
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2575723 ']'
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:44.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:44.144 14:24:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:44.145 [2024-07-12 14:24:35.978984] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:44.145 [2024-07-12 14:24:35.979031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575723 ]
00:19:44.145 EAL: No free 2048 kB hugepages reported on node 1
00:19:44.145 [2024-07-12 14:24:36.029311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.145 [2024-07-12 14:24:36.107855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:45.081 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:45.081 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:45.081 14:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RgTrv6rszu
00:19:45.081 [2024-07-12 14:24:36.950694] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:45.082 [2024-07-12 14:24:36.950763] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:45.082 [2024-07-12 14:24:36.958528] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:45.082 [2024-07-12 14:24:36.958551] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:45.082 [2024-07-12 14:24:36.958575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:45.082 [2024-07-12 14:24:36.958951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131f570 (107): Transport endpoint is not connected
00:19:45.082 [2024-07-12 14:24:36.959943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131f570 (9): Bad file descriptor
00:19:45.082 [2024-07-12 14:24:36.960944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:45.082 [2024-07-12 14:24:36.960955] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:45.082 [2024-07-12 14:24:36.960965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
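The "Could not find PSK for identity" failure above is the expected negative-test outcome: the target only registered a PSK for host1, so host2's TLS handshake cannot be completed. The keys themselves were produced earlier by format_interchange_psk (nvmf/common.sh@715 format_key in the trace). A minimal sketch of that interchange format, assuming (per the NVMe/TCP spec and the 48-character base64 blobs in the trace) that the configured key bytes are suffixed with their little-endian CRC32 before base64 encoding; the function name make_interchange_psk and its parameter names are illustrative, not SPDK API:

```python
import base64
import zlib


def make_interchange_psk(key: str, hmac_id: int = 1, prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the NVMe TLS PSK interchange format seen in the log.

    Assumption: the configured key bytes are followed by their CRC32
    (4 bytes, little-endian), and the result is base64-encoded between
    the "<prefix>:<hmac_id>:" header and a trailing ":".
    """
    data = key.encode("ascii")
    data += zlib.crc32(data).to_bytes(4, "little")  # integrity check appended to the key
    return "{}:{:02x}:{}:".format(prefix, hmac_id, base64.b64encode(data).decode("ascii"))


psk = make_interchange_psk("00112233445566778899aabbccddeeff", 1)
print(psk)  # a "NVMeTLSkey-1:01:...:" string shaped like the keys generated above
```

Decoding the base64 portion of such a key and re-checking the CRC is a quick way to validate a PSK file before passing it to nvmf_subsystem_add_host or bdev_nvme_attach_controller.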
00:19:45.082 request:
00:19:45.082 {
00:19:45.082 "name": "TLSTEST",
00:19:45.082 "trtype": "tcp",
00:19:45.082 "traddr": "10.0.0.2",
00:19:45.082 "adrfam": "ipv4",
00:19:45.082 "trsvcid": "4420",
00:19:45.082 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:45.082 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:19:45.082 "prchk_reftag": false,
00:19:45.082 "prchk_guard": false,
00:19:45.082 "hdgst": false,
00:19:45.082 "ddgst": false,
00:19:45.082 "psk": "/tmp/tmp.RgTrv6rszu",
00:19:45.082 "method": "bdev_nvme_attach_controller",
00:19:45.082 "req_id": 1
00:19:45.082 }
00:19:45.082 Got JSON-RPC error response
00:19:45.082 response:
00:19:45.082 {
00:19:45.082 "code": -5,
00:19:45.082 "message": "Input/output error"
00:19:45.082 }
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2575723
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2575723 ']'
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2575723
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:45.082 14:24:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2575723
00:19:45.082 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:45.082 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:45.082 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575723'
00:19:45.082 killing process with pid 2575723
00:19:45.082 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2575723
00:19:45.082 Received shutdown signal, test time was about 10.000000 seconds
00:19:45.082
00:19:45.082 Latency(us)
00:19:45.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.082 ===================================================================================================================
00:19:45.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:45.082 [2024-07-12 14:24:37.021675] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:45.082 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2575723
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RgTrv6rszu
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RgTrv6rszu
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RgTrv6rszu
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RgTrv6rszu'
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2575879
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2575879 /var/tmp/bdevperf.sock
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2575879 ']'
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:45.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:45.341 14:24:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:45.341 [2024-07-12 14:24:37.241460] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:19:45.341 [2024-07-12 14:24:37.241507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575879 ]
00:19:45.341 EAL: No free 2048 kB hugepages reported on node 1
00:19:45.341 [2024-07-12 14:24:37.291280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:45.601 [2024-07-12 14:24:37.370535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:46.169 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:46.169 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:46.169 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RgTrv6rszu
00:19:46.428 [2024-07-12 14:24:38.196012] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:46.428 [2024-07-12 14:24:38.196080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:19:46.428 [2024-07-12 14:24:38.201683] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:46.428 [2024-07-12 14:24:38.201705] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:46.428 [2024-07-12 14:24:38.201729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not
connected 00:19:46.428 [2024-07-12 14:24:38.202248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd02570 (107): Transport endpoint is not connected 00:19:46.428 [2024-07-12 14:24:38.203241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd02570 (9): Bad file descriptor 00:19:46.428 [2024-07-12 14:24:38.204242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:46.428 [2024-07-12 14:24:38.204253] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:46.428 [2024-07-12 14:24:38.204261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:46.428 request: 00:19:46.428 { 00:19:46.428 "name": "TLSTEST", 00:19:46.428 "trtype": "tcp", 00:19:46.428 "traddr": "10.0.0.2", 00:19:46.428 "adrfam": "ipv4", 00:19:46.428 "trsvcid": "4420", 00:19:46.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.428 "prchk_reftag": false, 00:19:46.428 "prchk_guard": false, 00:19:46.428 "hdgst": false, 00:19:46.428 "ddgst": false, 00:19:46.428 "psk": "/tmp/tmp.RgTrv6rszu", 00:19:46.428 "method": "bdev_nvme_attach_controller", 00:19:46.428 "req_id": 1 00:19:46.428 } 00:19:46.428 Got JSON-RPC error response 00:19:46.428 response: 00:19:46.428 { 00:19:46.428 "code": -5, 00:19:46.428 "message": "Input/output error" 00:19:46.428 } 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2575879 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2575879 ']' 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2575879 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2575879 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2575879' 00:19:46.428 killing process with pid 2575879 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2575879 00:19:46.428 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.428 00:19:46.428 Latency(us) 00:19:46.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.428 =================================================================================================================== 00:19:46.428 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.428 [2024-07-12 14:24:38.282136] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:46.428 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2575879 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2576060 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2576060 /var/tmp/bdevperf.sock 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2576060 ']' 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.687 14:24:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.687 [2024-07-12 14:24:38.504111] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:19:46.687 [2024-07-12 14:24:38.504161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576060 ] 00:19:46.687 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.687 [2024-07-12 14:24:38.556134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.687 [2024-07-12 14:24:38.626484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:47.625 [2024-07-12 14:24:39.464908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:47.625 [2024-07-12 14:24:39.466347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a8af0 (9): Bad file descriptor 00:19:47.625 [2024-07-12 14:24:39.467346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:47.625 [2024-07-12 14:24:39.467357] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:47.625 [2024-07-12 14:24:39.467365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.625 request: 00:19:47.625 { 00:19:47.625 "name": "TLSTEST", 00:19:47.625 "trtype": "tcp", 00:19:47.625 "traddr": "10.0.0.2", 00:19:47.625 "adrfam": "ipv4", 00:19:47.625 "trsvcid": "4420", 00:19:47.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.625 "prchk_reftag": false, 00:19:47.625 "prchk_guard": false, 00:19:47.625 "hdgst": false, 00:19:47.625 "ddgst": false, 00:19:47.625 "method": "bdev_nvme_attach_controller", 00:19:47.625 "req_id": 1 00:19:47.625 } 00:19:47.625 Got JSON-RPC error response 00:19:47.625 response: 00:19:47.625 { 00:19:47.625 "code": -5, 00:19:47.625 "message": "Input/output error" 00:19:47.625 } 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2576060 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2576060 ']' 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2576060 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2576060 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2576060' 00:19:47.625 killing process with pid 2576060 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 2576060 00:19:47.625 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.625 00:19:47.625 Latency(us) 00:19:47.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.625 =================================================================================================================== 00:19:47.625 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.625 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2576060 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2570960 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2570960 ']' 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2570960 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2570960 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2570960' 00:19:47.884 killing process with pid 2570960 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
2570960 00:19:47.884 [2024-07-12 14:24:39.748684] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:47.884 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2570960 00:19:48.141 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.soGEbjJ5PD 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.soGEbjJ5PD 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=2576366 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2576366 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2576366 ']' 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.142 14:24:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 [2024-07-12 14:24:40.046425] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:19:48.142 [2024-07-12 14:24:40.046476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.142 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.142 [2024-07-12 14:24:40.107297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.400 [2024-07-12 14:24:40.186943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.400 [2024-07-12 14:24:40.186982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:48.400 [2024-07-12 14:24:40.186989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.400 [2024-07-12 14:24:40.186995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.400 [2024-07-12 14:24:40.187001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.400 [2024-07-12 14:24:40.187018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.soGEbjJ5PD 00:19:48.968 14:24:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.239 [2024-07-12 14:24:41.046990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.239 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.239 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:19:49.505 [2024-07-12 14:24:41.379839] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.505 [2024-07-12 14:24:41.380016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.505 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.808 malloc0 00:19:49.808 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.808 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:19:50.066 [2024-07-12 14:24:41.905539] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.soGEbjJ5PD 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.soGEbjJ5PD' 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2576788 00:19:50.067 14:24:41 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2576788 /var/tmp/bdevperf.sock 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2576788 ']' 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.067 14:24:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.067 [2024-07-12 14:24:41.953810] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:19:50.067 [2024-07-12 14:24:41.953852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576788 ] 00:19:50.067 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.067 [2024-07-12 14:24:42.003972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.325 [2024-07-12 14:24:42.078790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.325 14:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.325 14:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.325 14:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:19:50.325 [2024-07-12 14:24:42.315551] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.325 [2024-07-12 14:24:42.315630] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.583 TLSTESTn1 00:19:50.583 14:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:50.583 Running I/O for 10 seconds... 
00:20:00.560 00:20:00.560 Latency(us) 00:20:00.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.560 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:00.560 Verification LBA range: start 0x0 length 0x2000 00:20:00.560 TLSTESTn1 : 10.02 4047.57 15.81 0.00 0.00 31578.42 6610.59 51972.90 00:20:00.560 =================================================================================================================== 00:20:00.560 Total : 4047.57 15.81 0.00 0.00 31578.42 6610.59 51972.90 00:20:00.560 0 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2576788 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2576788 ']' 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2576788 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.560 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2576788 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2576788' 00:20:00.819 killing process with pid 2576788 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2576788 00:20:00.819 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.819 00:20:00.819 Latency(us) 00:20:00.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.819 
=================================================================================================================== 00:20:00.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.819 [2024-07-12 14:24:52.598369] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2576788 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.soGEbjJ5PD 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.soGEbjJ5PD 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.soGEbjJ5PD 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.soGEbjJ5PD 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.soGEbjJ5PD' 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2578517 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2578517 /var/tmp/bdevperf.sock 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2578517 ']' 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.819 14:24:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.078 [2024-07-12 14:24:52.836578] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:20:01.078 [2024-07-12 14:24:52.836627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578517 ] 00:20:01.078 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.078 [2024-07-12 14:24:52.889764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.078 [2024-07-12 14:24:52.963591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.645 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.645 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:01.645 14:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:20:01.903 [2024-07-12 14:24:53.785962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.903 [2024-07-12 14:24:53.786016] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:01.903 [2024-07-12 14:24:53.786024] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.soGEbjJ5PD 00:20:01.903 request: 00:20:01.903 { 00:20:01.903 "name": "TLSTEST", 00:20:01.903 "trtype": "tcp", 00:20:01.903 "traddr": "10.0.0.2", 00:20:01.903 "adrfam": "ipv4", 00:20:01.903 "trsvcid": "4420", 00:20:01.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.903 "prchk_reftag": false, 00:20:01.904 "prchk_guard": false, 00:20:01.904 "hdgst": false, 00:20:01.904 "ddgst": false, 00:20:01.904 "psk": "/tmp/tmp.soGEbjJ5PD", 00:20:01.904 "method": "bdev_nvme_attach_controller", 
00:20:01.904 "req_id": 1 00:20:01.904 } 00:20:01.904 Got JSON-RPC error response 00:20:01.904 response: 00:20:01.904 { 00:20:01.904 "code": -1, 00:20:01.904 "message": "Operation not permitted" 00:20:01.904 } 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2578517 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2578517 ']' 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2578517 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2578517 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2578517' 00:20:01.904 killing process with pid 2578517 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2578517 00:20:01.904 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.904 00:20:01.904 Latency(us) 00:20:01.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.904 =================================================================================================================== 00:20:01.904 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.904 14:24:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2578517 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.162 
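The `bdev_nvme_attach_controller` failure above ("Incorrect permissions for PSK file" / "Operation not permitted") comes from the PSK file's mode, not its contents — tls.sh later fixes it with `chmod 0600`. A minimal sketch of the failure mode, assuming SPDK refuses PSK files readable by group or other (the exact check is internal to SPDK; the key material below is a placeholder, not a real PSK):

```shell
key=$(mktemp)
echo 'placeholder-key-material' > "$key"   # stand-in for the TLS PSK

chmod 0644 "$key"            # group/other-readable: the state that trips the error above
stat -c %a "$key"            # prints 644

chmod 0600 "$key"            # owner read/write only, as target/tls.sh does later
stat -c %a "$key"            # prints 600
rm -f "$key"
```

With the 0600 mode in place, the same `bdev_nvme_attach_controller` call is expected to load the PSK instead of returning "Operation not permitted".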
14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2576366 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2576366 ']' 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2576366 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2576366 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2576366' 00:20:02.162 killing process with pid 2576366 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2576366 00:20:02.162 [2024-07-12 14:24:54.066273] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:02.162 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2576366 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2578823 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2578823 
00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2578823 ']' 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.421 14:24:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.421 [2024-07-12 14:24:54.315985] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:02.421 [2024-07-12 14:24:54.316036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.421 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.421 [2024-07-12 14:24:54.373987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.680 [2024-07-12 14:24:54.443782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.680 [2024-07-12 14:24:54.443820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:02.680 [2024-07-12 14:24:54.443826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.680 [2024-07-12 14:24:54.443832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.680 [2024-07-12 14:24:54.443836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.680 [2024-07-12 14:24:54.443854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.soGEbjJ5PD 00:20:03.248 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.505 [2024-07-12 14:24:55.298810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.505 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.506 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.764 [2024-07-12 14:24:55.639693] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.764 [2024-07-12 14:24:55.639888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.764 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.021 malloc0 00:20:04.022 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.022 14:24:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:20:04.280 [2024-07-12 14:24:56.136949] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:04.280 [2024-07-12 14:24:56.136974] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:04.280 [2024-07-12 14:24:56.136995] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:04.280 
request: 00:20:04.280 { 00:20:04.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.280 "host": "nqn.2016-06.io.spdk:host1", 00:20:04.280 "psk": "/tmp/tmp.soGEbjJ5PD", 00:20:04.280 "method": "nvmf_subsystem_add_host", 00:20:04.280 "req_id": 1 00:20:04.280 } 00:20:04.280 Got JSON-RPC error response 00:20:04.280 response: 00:20:04.280 { 00:20:04.280 "code": -32603, 00:20:04.280 "message": "Internal error" 00:20:04.280 } 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2578823 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2578823 ']' 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2578823 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2578823 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2578823' 00:20:04.280 killing process with pid 2578823 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2578823 00:20:04.280 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2578823 00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.soGEbjJ5PD 
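Note the two distinct error surfaces in this run: the bdevperf-side `bdev_nvme_attach_controller` failed with code -1 ("Operation not permitted"), while the target-side `nvmf_subsystem_add_host` above failed with -32603, the standard JSON-RPC "Internal error" code. A sketch of the envelopes behind the dumps (the log prints method/params/req_id; the `jsonrpc`/`id` framing shown here is the standard JSON-RPC 2.0 envelope and is an assumption about the exact wire bytes):

```shell
# Request as rpc.py would frame it (values copied from the log dump above).
request='{"jsonrpc": "2.0", "method": "nvmf_subsystem_add_host",
  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
             "host": "nqn.2016-06.io.spdk:host1",
             "psk": "/tmp/tmp.soGEbjJ5PD"}, "id": 1}'

# Error response as logged: the PSK load failure surfaces as a generic
# JSON-RPC internal error rather than a permission-specific code.
response='{"jsonrpc": "2.0", "id": 1,
  "error": {"code": -32603, "message": "Internal error"}}'

echo "$response" | grep -o '"code": -32603'   # prints "code": -32603
```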
00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.539 14:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2579141 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2579141 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2579141 ']' 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.540 14:24:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.540 [2024-07-12 14:24:56.447264] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:20:04.540 [2024-07-12 14:24:56.447310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.540 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.540 [2024-07-12 14:24:56.503712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.798 [2024-07-12 14:24:56.569865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.798 [2024-07-12 14:24:56.569903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.798 [2024-07-12 14:24:56.569910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.798 [2024-07-12 14:24:56.569916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.798 [2024-07-12 14:24:56.569921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
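The coremask flags threaded through this run (`-m 0x2` / `-c 0x2` starting a reactor on core 1, `-m 0x4` on core 2) follow the usual DPDK convention: each set bit in the mask selects that core index. A minimal sketch of the mapping:

```shell
# List the core indices selected by a DPDK/SPDK coremask.
cores_from_mask() {
  mask=$(( $1 )); i=0; out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then out="$out $i"; fi
    mask=$(( mask >> 1 )); i=$(( i + 1 ))
  done
  echo "${out# }"
}

cores_from_mask 0x2   # prints 1  (matches "Reactor started on core 1" above)
cores_from_mask 0x4   # prints 2  (matches "Reactor started on core 2")
```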
00:20:04.798 [2024-07-12 14:24:56.569943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:20:05.366 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.soGEbjJ5PD 00:20:05.367 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.625 [2024-07-12 14:24:57.428882] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.625 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:05.625 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:05.884 [2024-07-12 14:24:57.769765] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.884 [2024-07-12 14:24:57.769950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.884 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:20:06.142 malloc0 00:20:06.142 14:24:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.143 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:20:06.402 [2024-07-12 14:24:58.283116] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2579487 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2579487 /var/tmp/bdevperf.sock 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2579487 ']' 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.402 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.402 [2024-07-12 14:24:58.326032] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:06.402 [2024-07-12 14:24:58.326075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579487 ] 00:20:06.402 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.402 [2024-07-12 14:24:58.376573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.661 [2024-07-12 14:24:58.454515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.661 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.661 14:24:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:06.661 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:20:06.920 [2024-07-12 14:24:58.691940] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.920 [2024-07-12 14:24:58.692005] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:06.920 TLSTESTn1 00:20:06.920 14:24:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:07.179 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:07.179 "subsystems": [ 00:20:07.179 { 00:20:07.179 
"subsystem": "keyring", 00:20:07.179 "config": [] 00:20:07.179 }, 00:20:07.179 { 00:20:07.179 "subsystem": "iobuf", 00:20:07.179 "config": [ 00:20:07.179 { 00:20:07.179 "method": "iobuf_set_options", 00:20:07.179 "params": { 00:20:07.179 "small_pool_count": 8192, 00:20:07.179 "large_pool_count": 1024, 00:20:07.179 "small_bufsize": 8192, 00:20:07.179 "large_bufsize": 135168 00:20:07.179 } 00:20:07.179 } 00:20:07.179 ] 00:20:07.179 }, 00:20:07.179 { 00:20:07.179 "subsystem": "sock", 00:20:07.179 "config": [ 00:20:07.179 { 00:20:07.179 "method": "sock_set_default_impl", 00:20:07.179 "params": { 00:20:07.179 "impl_name": "posix" 00:20:07.179 } 00:20:07.179 }, 00:20:07.179 { 00:20:07.179 "method": "sock_impl_set_options", 00:20:07.179 "params": { 00:20:07.179 "impl_name": "ssl", 00:20:07.179 "recv_buf_size": 4096, 00:20:07.180 "send_buf_size": 4096, 00:20:07.180 "enable_recv_pipe": true, 00:20:07.180 "enable_quickack": false, 00:20:07.180 "enable_placement_id": 0, 00:20:07.180 "enable_zerocopy_send_server": true, 00:20:07.180 "enable_zerocopy_send_client": false, 00:20:07.180 "zerocopy_threshold": 0, 00:20:07.180 "tls_version": 0, 00:20:07.180 "enable_ktls": false 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "sock_impl_set_options", 00:20:07.180 "params": { 00:20:07.180 "impl_name": "posix", 00:20:07.180 "recv_buf_size": 2097152, 00:20:07.180 "send_buf_size": 2097152, 00:20:07.180 "enable_recv_pipe": true, 00:20:07.180 "enable_quickack": false, 00:20:07.180 "enable_placement_id": 0, 00:20:07.180 "enable_zerocopy_send_server": true, 00:20:07.180 "enable_zerocopy_send_client": false, 00:20:07.180 "zerocopy_threshold": 0, 00:20:07.180 "tls_version": 0, 00:20:07.180 "enable_ktls": false 00:20:07.180 } 00:20:07.180 } 00:20:07.180 ] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "vmd", 00:20:07.180 "config": [] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "accel", 00:20:07.180 "config": [ 00:20:07.180 { 00:20:07.180 "method": 
"accel_set_options", 00:20:07.180 "params": { 00:20:07.180 "small_cache_size": 128, 00:20:07.180 "large_cache_size": 16, 00:20:07.180 "task_count": 2048, 00:20:07.180 "sequence_count": 2048, 00:20:07.180 "buf_count": 2048 00:20:07.180 } 00:20:07.180 } 00:20:07.180 ] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "bdev", 00:20:07.180 "config": [ 00:20:07.180 { 00:20:07.180 "method": "bdev_set_options", 00:20:07.180 "params": { 00:20:07.180 "bdev_io_pool_size": 65535, 00:20:07.180 "bdev_io_cache_size": 256, 00:20:07.180 "bdev_auto_examine": true, 00:20:07.180 "iobuf_small_cache_size": 128, 00:20:07.180 "iobuf_large_cache_size": 16 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_raid_set_options", 00:20:07.180 "params": { 00:20:07.180 "process_window_size_kb": 1024 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_iscsi_set_options", 00:20:07.180 "params": { 00:20:07.180 "timeout_sec": 30 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_nvme_set_options", 00:20:07.180 "params": { 00:20:07.180 "action_on_timeout": "none", 00:20:07.180 "timeout_us": 0, 00:20:07.180 "timeout_admin_us": 0, 00:20:07.180 "keep_alive_timeout_ms": 10000, 00:20:07.180 "arbitration_burst": 0, 00:20:07.180 "low_priority_weight": 0, 00:20:07.180 "medium_priority_weight": 0, 00:20:07.180 "high_priority_weight": 0, 00:20:07.180 "nvme_adminq_poll_period_us": 10000, 00:20:07.180 "nvme_ioq_poll_period_us": 0, 00:20:07.180 "io_queue_requests": 0, 00:20:07.180 "delay_cmd_submit": true, 00:20:07.180 "transport_retry_count": 4, 00:20:07.180 "bdev_retry_count": 3, 00:20:07.180 "transport_ack_timeout": 0, 00:20:07.180 "ctrlr_loss_timeout_sec": 0, 00:20:07.180 "reconnect_delay_sec": 0, 00:20:07.180 "fast_io_fail_timeout_sec": 0, 00:20:07.180 "disable_auto_failback": false, 00:20:07.180 "generate_uuids": false, 00:20:07.180 "transport_tos": 0, 00:20:07.180 "nvme_error_stat": false, 00:20:07.180 "rdma_srq_size": 0, 
00:20:07.180 "io_path_stat": false, 00:20:07.180 "allow_accel_sequence": false, 00:20:07.180 "rdma_max_cq_size": 0, 00:20:07.180 "rdma_cm_event_timeout_ms": 0, 00:20:07.180 "dhchap_digests": [ 00:20:07.180 "sha256", 00:20:07.180 "sha384", 00:20:07.180 "sha512" 00:20:07.180 ], 00:20:07.180 "dhchap_dhgroups": [ 00:20:07.180 "null", 00:20:07.180 "ffdhe2048", 00:20:07.180 "ffdhe3072", 00:20:07.180 "ffdhe4096", 00:20:07.180 "ffdhe6144", 00:20:07.180 "ffdhe8192" 00:20:07.180 ] 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_nvme_set_hotplug", 00:20:07.180 "params": { 00:20:07.180 "period_us": 100000, 00:20:07.180 "enable": false 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_malloc_create", 00:20:07.180 "params": { 00:20:07.180 "name": "malloc0", 00:20:07.180 "num_blocks": 8192, 00:20:07.180 "block_size": 4096, 00:20:07.180 "physical_block_size": 4096, 00:20:07.180 "uuid": "0e48d87a-8238-4a0e-bfa0-2eb40c5082b3", 00:20:07.180 "optimal_io_boundary": 0 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "bdev_wait_for_examine" 00:20:07.180 } 00:20:07.180 ] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "nbd", 00:20:07.180 "config": [] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "scheduler", 00:20:07.180 "config": [ 00:20:07.180 { 00:20:07.180 "method": "framework_set_scheduler", 00:20:07.180 "params": { 00:20:07.180 "name": "static" 00:20:07.180 } 00:20:07.180 } 00:20:07.180 ] 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "subsystem": "nvmf", 00:20:07.180 "config": [ 00:20:07.180 { 00:20:07.180 "method": "nvmf_set_config", 00:20:07.180 "params": { 00:20:07.180 "discovery_filter": "match_any", 00:20:07.180 "admin_cmd_passthru": { 00:20:07.180 "identify_ctrlr": false 00:20:07.180 } 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_set_max_subsystems", 00:20:07.180 "params": { 00:20:07.180 "max_subsystems": 1024 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 
00:20:07.180 "method": "nvmf_set_crdt", 00:20:07.180 "params": { 00:20:07.180 "crdt1": 0, 00:20:07.180 "crdt2": 0, 00:20:07.180 "crdt3": 0 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_create_transport", 00:20:07.180 "params": { 00:20:07.180 "trtype": "TCP", 00:20:07.180 "max_queue_depth": 128, 00:20:07.180 "max_io_qpairs_per_ctrlr": 127, 00:20:07.180 "in_capsule_data_size": 4096, 00:20:07.180 "max_io_size": 131072, 00:20:07.180 "io_unit_size": 131072, 00:20:07.180 "max_aq_depth": 128, 00:20:07.180 "num_shared_buffers": 511, 00:20:07.180 "buf_cache_size": 4294967295, 00:20:07.180 "dif_insert_or_strip": false, 00:20:07.180 "zcopy": false, 00:20:07.180 "c2h_success": false, 00:20:07.180 "sock_priority": 0, 00:20:07.180 "abort_timeout_sec": 1, 00:20:07.180 "ack_timeout": 0, 00:20:07.180 "data_wr_pool_size": 0 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_create_subsystem", 00:20:07.180 "params": { 00:20:07.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.180 "allow_any_host": false, 00:20:07.180 "serial_number": "SPDK00000000000001", 00:20:07.180 "model_number": "SPDK bdev Controller", 00:20:07.180 "max_namespaces": 10, 00:20:07.180 "min_cntlid": 1, 00:20:07.180 "max_cntlid": 65519, 00:20:07.180 "ana_reporting": false 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_subsystem_add_host", 00:20:07.180 "params": { 00:20:07.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.180 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.180 "psk": "/tmp/tmp.soGEbjJ5PD" 00:20:07.180 } 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_subsystem_add_ns", 00:20:07.180 "params": { 00:20:07.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.180 "namespace": { 00:20:07.180 "nsid": 1, 00:20:07.180 "bdev_name": "malloc0", 00:20:07.180 "nguid": "0E48D87A82384A0EBFA02EB40C5082B3", 00:20:07.180 "uuid": "0e48d87a-8238-4a0e-bfa0-2eb40c5082b3", 00:20:07.180 "no_auto_visible": false 00:20:07.180 } 00:20:07.180 
} 00:20:07.180 }, 00:20:07.180 { 00:20:07.180 "method": "nvmf_subsystem_add_listener", 00:20:07.180 "params": { 00:20:07.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.180 "listen_address": { 00:20:07.180 "trtype": "TCP", 00:20:07.180 "adrfam": "IPv4", 00:20:07.180 "traddr": "10.0.0.2", 00:20:07.181 "trsvcid": "4420" 00:20:07.181 }, 00:20:07.181 "secure_channel": true 00:20:07.181 } 00:20:07.181 } 00:20:07.181 ] 00:20:07.181 } 00:20:07.181 ] 00:20:07.181 }' 00:20:07.181 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:07.440 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:07.440 "subsystems": [ 00:20:07.440 { 00:20:07.440 "subsystem": "keyring", 00:20:07.440 "config": [] 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "subsystem": "iobuf", 00:20:07.440 "config": [ 00:20:07.440 { 00:20:07.440 "method": "iobuf_set_options", 00:20:07.440 "params": { 00:20:07.440 "small_pool_count": 8192, 00:20:07.440 "large_pool_count": 1024, 00:20:07.440 "small_bufsize": 8192, 00:20:07.440 "large_bufsize": 135168 00:20:07.440 } 00:20:07.440 } 00:20:07.440 ] 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "subsystem": "sock", 00:20:07.440 "config": [ 00:20:07.440 { 00:20:07.440 "method": "sock_set_default_impl", 00:20:07.440 "params": { 00:20:07.440 "impl_name": "posix" 00:20:07.440 } 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "method": "sock_impl_set_options", 00:20:07.440 "params": { 00:20:07.440 "impl_name": "ssl", 00:20:07.440 "recv_buf_size": 4096, 00:20:07.440 "send_buf_size": 4096, 00:20:07.440 "enable_recv_pipe": true, 00:20:07.440 "enable_quickack": false, 00:20:07.440 "enable_placement_id": 0, 00:20:07.440 "enable_zerocopy_send_server": true, 00:20:07.440 "enable_zerocopy_send_client": false, 00:20:07.440 "zerocopy_threshold": 0, 00:20:07.440 "tls_version": 0, 00:20:07.440 "enable_ktls": false 00:20:07.440 } 00:20:07.440 }, 00:20:07.440 { 
00:20:07.440 "method": "sock_impl_set_options", 00:20:07.440 "params": { 00:20:07.440 "impl_name": "posix", 00:20:07.440 "recv_buf_size": 2097152, 00:20:07.440 "send_buf_size": 2097152, 00:20:07.440 "enable_recv_pipe": true, 00:20:07.440 "enable_quickack": false, 00:20:07.440 "enable_placement_id": 0, 00:20:07.440 "enable_zerocopy_send_server": true, 00:20:07.440 "enable_zerocopy_send_client": false, 00:20:07.440 "zerocopy_threshold": 0, 00:20:07.440 "tls_version": 0, 00:20:07.440 "enable_ktls": false 00:20:07.440 } 00:20:07.440 } 00:20:07.440 ] 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "subsystem": "vmd", 00:20:07.440 "config": [] 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "subsystem": "accel", 00:20:07.440 "config": [ 00:20:07.440 { 00:20:07.440 "method": "accel_set_options", 00:20:07.440 "params": { 00:20:07.440 "small_cache_size": 128, 00:20:07.440 "large_cache_size": 16, 00:20:07.440 "task_count": 2048, 00:20:07.440 "sequence_count": 2048, 00:20:07.440 "buf_count": 2048 00:20:07.440 } 00:20:07.440 } 00:20:07.440 ] 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "subsystem": "bdev", 00:20:07.440 "config": [ 00:20:07.440 { 00:20:07.440 "method": "bdev_set_options", 00:20:07.440 "params": { 00:20:07.440 "bdev_io_pool_size": 65535, 00:20:07.440 "bdev_io_cache_size": 256, 00:20:07.440 "bdev_auto_examine": true, 00:20:07.440 "iobuf_small_cache_size": 128, 00:20:07.440 "iobuf_large_cache_size": 16 00:20:07.440 } 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "method": "bdev_raid_set_options", 00:20:07.440 "params": { 00:20:07.440 "process_window_size_kb": 1024 00:20:07.440 } 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "method": "bdev_iscsi_set_options", 00:20:07.440 "params": { 00:20:07.440 "timeout_sec": 30 00:20:07.440 } 00:20:07.440 }, 00:20:07.440 { 00:20:07.440 "method": "bdev_nvme_set_options", 00:20:07.440 "params": { 00:20:07.440 "action_on_timeout": "none", 00:20:07.440 "timeout_us": 0, 00:20:07.440 "timeout_admin_us": 0, 00:20:07.440 "keep_alive_timeout_ms": 
10000, 00:20:07.440 "arbitration_burst": 0, 00:20:07.440 "low_priority_weight": 0, 00:20:07.440 "medium_priority_weight": 0, 00:20:07.441 "high_priority_weight": 0, 00:20:07.441 "nvme_adminq_poll_period_us": 10000, 00:20:07.441 "nvme_ioq_poll_period_us": 0, 00:20:07.441 "io_queue_requests": 512, 00:20:07.441 "delay_cmd_submit": true, 00:20:07.441 "transport_retry_count": 4, 00:20:07.441 "bdev_retry_count": 3, 00:20:07.441 "transport_ack_timeout": 0, 00:20:07.441 "ctrlr_loss_timeout_sec": 0, 00:20:07.441 "reconnect_delay_sec": 0, 00:20:07.441 "fast_io_fail_timeout_sec": 0, 00:20:07.441 "disable_auto_failback": false, 00:20:07.441 "generate_uuids": false, 00:20:07.441 "transport_tos": 0, 00:20:07.441 "nvme_error_stat": false, 00:20:07.441 "rdma_srq_size": 0, 00:20:07.441 "io_path_stat": false, 00:20:07.441 "allow_accel_sequence": false, 00:20:07.441 "rdma_max_cq_size": 0, 00:20:07.441 "rdma_cm_event_timeout_ms": 0, 00:20:07.441 "dhchap_digests": [ 00:20:07.441 "sha256", 00:20:07.441 "sha384", 00:20:07.441 "sha512" 00:20:07.441 ], 00:20:07.441 "dhchap_dhgroups": [ 00:20:07.441 "null", 00:20:07.441 "ffdhe2048", 00:20:07.441 "ffdhe3072", 00:20:07.441 "ffdhe4096", 00:20:07.441 "ffdhe6144", 00:20:07.441 "ffdhe8192" 00:20:07.441 ] 00:20:07.441 } 00:20:07.441 }, 00:20:07.441 { 00:20:07.441 "method": "bdev_nvme_attach_controller", 00:20:07.441 "params": { 00:20:07.441 "name": "TLSTEST", 00:20:07.441 "trtype": "TCP", 00:20:07.441 "adrfam": "IPv4", 00:20:07.441 "traddr": "10.0.0.2", 00:20:07.441 "trsvcid": "4420", 00:20:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.441 "prchk_reftag": false, 00:20:07.441 "prchk_guard": false, 00:20:07.441 "ctrlr_loss_timeout_sec": 0, 00:20:07.441 "reconnect_delay_sec": 0, 00:20:07.441 "fast_io_fail_timeout_sec": 0, 00:20:07.441 "psk": "/tmp/tmp.soGEbjJ5PD", 00:20:07.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.441 "hdgst": false, 00:20:07.441 "ddgst": false 00:20:07.441 } 00:20:07.441 }, 00:20:07.441 { 00:20:07.441 "method": 
"bdev_nvme_set_hotplug", 00:20:07.441 "params": { 00:20:07.441 "period_us": 100000, 00:20:07.441 "enable": false 00:20:07.441 } 00:20:07.441 }, 00:20:07.441 { 00:20:07.441 "method": "bdev_wait_for_examine" 00:20:07.441 } 00:20:07.441 ] 00:20:07.441 }, 00:20:07.441 { 00:20:07.441 "subsystem": "nbd", 00:20:07.441 "config": [] 00:20:07.441 } 00:20:07.441 ] 00:20:07.441 }' 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2579487 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2579487 ']' 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2579487 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2579487 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2579487' 00:20:07.441 killing process with pid 2579487 00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2579487 00:20:07.441 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.441 00:20:07.441 Latency(us) 00:20:07.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.441 =================================================================================================================== 00:20:07.441 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.441 [2024-07-12 14:24:59.320765] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:20:07.441 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2579487 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2579141 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2579141 ']' 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2579141 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2579141 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2579141' 00:20:07.700 killing process with pid 2579141 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2579141 00:20:07.700 [2024-07-12 14:24:59.536570] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:07.700 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2579141 00:20:07.960 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:07.960 14:24:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:07.960 "subsystems": [ 00:20:07.960 { 00:20:07.960 "subsystem": "keyring", 00:20:07.960 "config": [] 00:20:07.960 }, 00:20:07.960 { 00:20:07.960 "subsystem": "iobuf", 00:20:07.960 "config": [ 00:20:07.960 { 00:20:07.960 "method": "iobuf_set_options", 00:20:07.960 "params": { 00:20:07.960 "small_pool_count": 8192, 00:20:07.960 "large_pool_count": 1024, 00:20:07.960 "small_bufsize": 8192, 00:20:07.960 "large_bufsize": 135168 
00:20:07.960 } 00:20:07.960 } 00:20:07.960 ] 00:20:07.960 }, 00:20:07.960 { 00:20:07.960 "subsystem": "sock", 00:20:07.960 "config": [ 00:20:07.960 { 00:20:07.960 "method": "sock_set_default_impl", 00:20:07.960 "params": { 00:20:07.960 "impl_name": "posix" 00:20:07.960 } 00:20:07.960 }, 00:20:07.960 { 00:20:07.960 "method": "sock_impl_set_options", 00:20:07.960 "params": { 00:20:07.960 "impl_name": "ssl", 00:20:07.961 "recv_buf_size": 4096, 00:20:07.961 "send_buf_size": 4096, 00:20:07.961 "enable_recv_pipe": true, 00:20:07.961 "enable_quickack": false, 00:20:07.961 "enable_placement_id": 0, 00:20:07.961 "enable_zerocopy_send_server": true, 00:20:07.961 "enable_zerocopy_send_client": false, 00:20:07.961 "zerocopy_threshold": 0, 00:20:07.961 "tls_version": 0, 00:20:07.961 "enable_ktls": false 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "sock_impl_set_options", 00:20:07.961 "params": { 00:20:07.961 "impl_name": "posix", 00:20:07.961 "recv_buf_size": 2097152, 00:20:07.961 "send_buf_size": 2097152, 00:20:07.961 "enable_recv_pipe": true, 00:20:07.961 "enable_quickack": false, 00:20:07.961 "enable_placement_id": 0, 00:20:07.961 "enable_zerocopy_send_server": true, 00:20:07.961 "enable_zerocopy_send_client": false, 00:20:07.961 "zerocopy_threshold": 0, 00:20:07.961 "tls_version": 0, 00:20:07.961 "enable_ktls": false 00:20:07.961 } 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "vmd", 00:20:07.961 "config": [] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "accel", 00:20:07.961 "config": [ 00:20:07.961 { 00:20:07.961 "method": "accel_set_options", 00:20:07.961 "params": { 00:20:07.961 "small_cache_size": 128, 00:20:07.961 "large_cache_size": 16, 00:20:07.961 "task_count": 2048, 00:20:07.961 "sequence_count": 2048, 00:20:07.961 "buf_count": 2048 00:20:07.961 } 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "bdev", 00:20:07.961 "config": [ 00:20:07.961 { 
00:20:07.961 "method": "bdev_set_options", 00:20:07.961 "params": { 00:20:07.961 "bdev_io_pool_size": 65535, 00:20:07.961 "bdev_io_cache_size": 256, 00:20:07.961 "bdev_auto_examine": true, 00:20:07.961 "iobuf_small_cache_size": 128, 00:20:07.961 "iobuf_large_cache_size": 16 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_raid_set_options", 00:20:07.961 "params": { 00:20:07.961 "process_window_size_kb": 1024 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_iscsi_set_options", 00:20:07.961 "params": { 00:20:07.961 "timeout_sec": 30 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_nvme_set_options", 00:20:07.961 "params": { 00:20:07.961 "action_on_timeout": "none", 00:20:07.961 "timeout_us": 0, 00:20:07.961 "timeout_admin_us": 0, 00:20:07.961 "keep_alive_timeout_ms": 10000, 00:20:07.961 "arbitration_burst": 0, 00:20:07.961 "low_priority_weight": 0, 00:20:07.961 "medium_priority_weight": 0, 00:20:07.961 "high_priority_weight": 0, 00:20:07.961 "nvme_adminq_poll_period_us": 10000, 00:20:07.961 "nvme_ioq_poll_period_us": 0, 00:20:07.961 "io_queue_requests": 0, 00:20:07.961 "delay_cmd_submit": true, 00:20:07.961 "transport_retry_count": 4, 00:20:07.961 "bdev_retry_count": 3, 00:20:07.961 "transport_ack_timeout": 0, 00:20:07.961 "ctrlr_loss_timeout_sec": 0, 00:20:07.961 "reconnect_delay_sec": 0, 00:20:07.961 "fast_io_fail_timeout_sec": 0, 00:20:07.961 "disable_auto_failback": false, 00:20:07.961 "generate_uuids": false, 00:20:07.961 "transport_tos": 0, 00:20:07.961 "nvme_error_stat": false, 00:20:07.961 "rdma_srq_size": 0, 00:20:07.961 "io_path_stat": false, 00:20:07.961 "allow_accel_sequence": false, 00:20:07.961 "rdma_max_cq_size": 0, 00:20:07.961 "rdma_cm_event_timeout_ms": 0, 00:20:07.961 "dhchap_digests": [ 00:20:07.961 "sha256", 00:20:07.961 "sha384", 00:20:07.961 "sha512" 00:20:07.961 ], 00:20:07.961 "dhchap_dhgroups": [ 00:20:07.961 "null", 00:20:07.961 "ffdhe2048", 00:20:07.961 
"ffdhe3072", 00:20:07.961 "ffdhe4096", 00:20:07.961 "ffdhe6144", 00:20:07.961 "ffdhe8192" 00:20:07.961 ] 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_nvme_set_hotplug", 00:20:07.961 "params": { 00:20:07.961 "period_us": 100000, 00:20:07.961 "enable": false 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_malloc_create", 00:20:07.961 "params": { 00:20:07.961 "name": "malloc0", 00:20:07.961 "num_blocks": 8192, 00:20:07.961 "block_size": 4096, 00:20:07.961 "physical_block_size": 4096, 00:20:07.961 "uuid": "0e48d87a-8238-4a0e-bfa0-2eb40c5082b3", 00:20:07.961 "optimal_io_boundary": 0 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "bdev_wait_for_examine" 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "nbd", 00:20:07.961 "config": [] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "scheduler", 00:20:07.961 "config": [ 00:20:07.961 { 00:20:07.961 "method": "framework_set_scheduler", 00:20:07.961 "params": { 00:20:07.961 "name": "static" 00:20:07.961 } 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "subsystem": "nvmf", 00:20:07.961 "config": [ 00:20:07.961 { 00:20:07.961 "method": "nvmf_set_config", 00:20:07.961 "params": { 00:20:07.961 "discovery_filter": "match_any", 00:20:07.961 "admin_cmd_passthru": { 00:20:07.961 "identify_ctrlr": false 00:20:07.961 } 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_set_max_subsystems", 00:20:07.961 "params": { 00:20:07.961 "max_subsystems": 1024 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_set_crdt", 00:20:07.961 "params": { 00:20:07.961 "crdt1": 0, 00:20:07.961 "crdt2": 0, 00:20:07.961 "crdt3": 0 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_create_transport", 00:20:07.961 "params": { 00:20:07.961 "trtype": "TCP", 00:20:07.961 "max_queue_depth": 128, 00:20:07.961 "max_io_qpairs_per_ctrlr": 127, 
00:20:07.961 "in_capsule_data_size": 4096, 00:20:07.961 "max_io_size": 131072, 00:20:07.961 "io_unit_size": 131072, 00:20:07.961 "max_aq_depth": 128, 00:20:07.961 "num_shared_buffers": 511, 00:20:07.961 "buf_cache_size": 4294967295, 00:20:07.961 "dif_insert_or_strip": false, 00:20:07.961 "zcopy": false, 00:20:07.961 "c2h_success": false, 00:20:07.961 "sock_priority": 0, 00:20:07.961 "abort_timeout_sec": 1, 00:20:07.961 "ack_timeout": 0, 00:20:07.961 "data_wr_pool_size": 0 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_create_subsystem", 00:20:07.961 "params": { 00:20:07.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.961 "allow_any_host": false, 00:20:07.961 "serial_number": "SPDK00000000000001", 00:20:07.961 "model_number": "SPDK bdev Controller", 00:20:07.961 "max_namespaces": 10, 00:20:07.961 "min_cntlid": 1, 00:20:07.961 "max_cntlid": 65519, 00:20:07.961 "ana_reporting": false 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_subsystem_add_host", 00:20:07.961 "params": { 00:20:07.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.961 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.961 "psk": "/tmp/tmp.soGEbjJ5PD" 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_subsystem_add_ns", 00:20:07.961 "params": { 00:20:07.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.961 "namespace": { 00:20:07.961 "nsid": 1, 00:20:07.961 "bdev_name": "malloc0", 00:20:07.961 "nguid": "0E48D87A82384A0EBFA02EB40C5082B3", 00:20:07.961 "uuid": "0e48d87a-8238-4a0e-bfa0-2eb40c5082b3", 00:20:07.961 "no_auto_visible": false 00:20:07.961 } 00:20:07.961 } 00:20:07.961 }, 00:20:07.961 { 00:20:07.961 "method": "nvmf_subsystem_add_listener", 00:20:07.961 "params": { 00:20:07.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.961 "listen_address": { 00:20:07.961 "trtype": "TCP", 00:20:07.961 "adrfam": "IPv4", 00:20:07.961 "traddr": "10.0.0.2", 00:20:07.961 "trsvcid": "4420" 00:20:07.961 }, 00:20:07.961 "secure_channel": 
true 00:20:07.961 } 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 } 00:20:07.961 ] 00:20:07.961 }' 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2579755 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2579755 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2579755 ']' 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.962 14:24:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.962 [2024-07-12 14:24:59.777209] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:20:07.962 [2024-07-12 14:24:59.777254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.962 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.962 [2024-07-12 14:24:59.832592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.962 [2024-07-12 14:24:59.910525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.962 [2024-07-12 14:24:59.910563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.962 [2024-07-12 14:24:59.910570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.962 [2024-07-12 14:24:59.910576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.962 [2024-07-12 14:24:59.910582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.962 [2024-07-12 14:24:59.910631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.222 [2024-07-12 14:25:00.114341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.222 [2024-07-12 14:25:00.130312] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:08.222 [2024-07-12 14:25:00.146364] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.222 [2024-07-12 14:25:00.154571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2579892 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2579892 /var/tmp/bdevperf.sock 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2579892 ']' 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.791 14:25:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:08.791 "subsystems": [ 00:20:08.791 { 00:20:08.791 "subsystem": "keyring", 00:20:08.791 "config": [] 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "subsystem": "iobuf", 00:20:08.791 "config": [ 00:20:08.791 { 00:20:08.791 "method": "iobuf_set_options", 00:20:08.791 "params": { 00:20:08.791 "small_pool_count": 8192, 00:20:08.791 "large_pool_count": 1024, 00:20:08.791 "small_bufsize": 8192, 00:20:08.791 "large_bufsize": 135168 00:20:08.791 } 00:20:08.791 } 00:20:08.791 ] 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "subsystem": "sock", 00:20:08.791 "config": [ 00:20:08.791 { 00:20:08.791 "method": "sock_set_default_impl", 00:20:08.791 "params": { 00:20:08.791 "impl_name": "posix" 00:20:08.791 } 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "method": "sock_impl_set_options", 00:20:08.791 "params": { 00:20:08.791 "impl_name": "ssl", 00:20:08.791 "recv_buf_size": 4096, 00:20:08.791 "send_buf_size": 4096, 00:20:08.791 "enable_recv_pipe": true, 00:20:08.791 "enable_quickack": false, 00:20:08.791 "enable_placement_id": 0, 00:20:08.791 "enable_zerocopy_send_server": true, 00:20:08.791 "enable_zerocopy_send_client": false, 00:20:08.791 "zerocopy_threshold": 0, 00:20:08.791 "tls_version": 0, 00:20:08.791 "enable_ktls": false 00:20:08.791 } 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "method": "sock_impl_set_options", 00:20:08.791 "params": { 00:20:08.791 "impl_name": "posix", 00:20:08.791 "recv_buf_size": 2097152, 00:20:08.791 "send_buf_size": 2097152, 00:20:08.791 "enable_recv_pipe": true, 00:20:08.791 "enable_quickack": false, 00:20:08.791 "enable_placement_id": 0, 00:20:08.791 "enable_zerocopy_send_server": true, 00:20:08.791 "enable_zerocopy_send_client": false, 
00:20:08.791 "zerocopy_threshold": 0, 00:20:08.791 "tls_version": 0, 00:20:08.791 "enable_ktls": false 00:20:08.791 } 00:20:08.791 } 00:20:08.791 ] 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "subsystem": "vmd", 00:20:08.791 "config": [] 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "subsystem": "accel", 00:20:08.791 "config": [ 00:20:08.791 { 00:20:08.791 "method": "accel_set_options", 00:20:08.791 "params": { 00:20:08.791 "small_cache_size": 128, 00:20:08.791 "large_cache_size": 16, 00:20:08.791 "task_count": 2048, 00:20:08.791 "sequence_count": 2048, 00:20:08.791 "buf_count": 2048 00:20:08.791 } 00:20:08.791 } 00:20:08.791 ] 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "subsystem": "bdev", 00:20:08.791 "config": [ 00:20:08.791 { 00:20:08.791 "method": "bdev_set_options", 00:20:08.791 "params": { 00:20:08.791 "bdev_io_pool_size": 65535, 00:20:08.791 "bdev_io_cache_size": 256, 00:20:08.791 "bdev_auto_examine": true, 00:20:08.791 "iobuf_small_cache_size": 128, 00:20:08.791 "iobuf_large_cache_size": 16 00:20:08.791 } 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "method": "bdev_raid_set_options", 00:20:08.791 "params": { 00:20:08.791 "process_window_size_kb": 1024 00:20:08.791 } 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "method": "bdev_iscsi_set_options", 00:20:08.791 "params": { 00:20:08.791 "timeout_sec": 30 00:20:08.791 } 00:20:08.791 }, 00:20:08.791 { 00:20:08.791 "method": "bdev_nvme_set_options", 00:20:08.791 "params": { 00:20:08.791 "action_on_timeout": "none", 00:20:08.791 "timeout_us": 0, 00:20:08.791 "timeout_admin_us": 0, 00:20:08.791 "keep_alive_timeout_ms": 10000, 00:20:08.791 "arbitration_burst": 0, 00:20:08.791 "low_priority_weight": 0, 00:20:08.791 "medium_priority_weight": 0, 00:20:08.791 "high_priority_weight": 0, 00:20:08.791 "nvme_adminq_poll_period_us": 10000, 00:20:08.791 "nvme_ioq_poll_period_us": 0, 00:20:08.791 "io_queue_requests": 512, 00:20:08.791 "delay_cmd_submit": true, 00:20:08.791 "transport_retry_count": 4, 00:20:08.791 
"bdev_retry_count": 3, 00:20:08.791 "transport_ack_timeout": 0, 00:20:08.792 "ctrlr_loss_timeout_sec": 0, 00:20:08.792 "reconnect_delay_sec": 0, 00:20:08.792 "fast_io_fail_timeout_sec": 0, 00:20:08.792 "disable_auto_failback": false, 00:20:08.792 "generate_uuids": false, 00:20:08.792 "transport_tos": 0, 00:20:08.792 "nvme_error_stat": false, 00:20:08.792 "rdma_srq_size": 0, 00:20:08.792 "io_path_stat": false, 00:20:08.792 "allow_accel_sequence": false, 00:20:08.792 "rdma_max_cq_size": 0, 00:20:08.792 "rdma_cm_event_timeout_ms": 0, 00:20:08.792 "dhchap_digests": [ 00:20:08.792 "sha256", 00:20:08.792 "sha384", 00:20:08.792 "sha512" 00:20:08.792 ], 00:20:08.792 "dhchap_dhgroups": [ 00:20:08.792 "null", 00:20:08.792 "ffdhe2048", 00:20:08.792 "ffdhe3072", 00:20:08.792 "ffdhe4096", 00:20:08.792 "ffdhe6144", 00:20:08.792 "ffdhe8192" 00:20:08.792 ] 00:20:08.792 } 00:20:08.792 }, 00:20:08.792 { 00:20:08.792 "method": "bdev_nvme_attach_controller", 00:20:08.792 "params": { 00:20:08.792 "name": "TLSTEST", 00:20:08.792 "trtype": "TCP", 00:20:08.792 "adrfam": "IPv4", 00:20:08.792 "traddr": "10.0.0.2", 00:20:08.792 "trsvcid": "4420", 00:20:08.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.792 "prchk_reftag": false, 00:20:08.792 "prchk_guard": false, 00:20:08.792 "ctrlr_loss_timeout_sec": 0, 00:20:08.792 "reconnect_delay_sec": 0, 00:20:08.792 "fast_io_fail_timeout_sec": 0, 00:20:08.792 "psk": "/tmp/tmp.soGEbjJ5PD", 00:20:08.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.792 "hdgst": false, 00:20:08.792 "ddgst": false 00:20:08.792 } 00:20:08.792 }, 00:20:08.792 { 00:20:08.792 "method": "bdev_nvme_set_hotplug", 00:20:08.792 "params": { 00:20:08.792 "period_us": 100000, 00:20:08.792 "enable": false 00:20:08.792 } 00:20:08.792 }, 00:20:08.792 { 00:20:08.792 "method": "bdev_wait_for_examine" 00:20:08.792 } 00:20:08.792 ] 00:20:08.792 }, 00:20:08.792 { 00:20:08.792 "subsystem": "nbd", 00:20:08.792 "config": [] 00:20:08.792 } 00:20:08.792 ] 00:20:08.792 }' 00:20:08.792 
14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.792 14:25:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.792 [2024-07-12 14:25:00.662035] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:08.792 [2024-07-12 14:25:00.662080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579892 ] 00:20:08.792 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.792 [2024-07-12 14:25:00.711014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.792 [2024-07-12 14:25:00.782855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.051 [2024-07-12 14:25:00.925277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.051 [2024-07-12 14:25:00.925372] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:09.619 14:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.619 14:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:09.619 14:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:09.619 Running I/O for 10 seconds... 
00:20:19.634 00:20:19.635 Latency(us) 00:20:19.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.635 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.635 Verification LBA range: start 0x0 length 0x2000 00:20:19.635 TLSTESTn1 : 10.02 4249.61 16.60 0.00 0.00 30074.70 4986.43 52200.85 00:20:19.635 =================================================================================================================== 00:20:19.635 Total : 4249.61 16.60 0.00 0.00 30074.70 4986.43 52200.85 00:20:19.635 0 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2579892 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2579892 ']' 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2579892 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.635 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2579892 00:20:19.893 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:19.893 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:19.893 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2579892' 00:20:19.893 killing process with pid 2579892 00:20:19.893 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2579892 00:20:19.893 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.893 00:20:19.893 Latency(us) 00:20:19.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.893 
=================================================================================================================== 00:20:19.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.894 [2024-07-12 14:25:11.653316] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2579892 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2579755 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2579755 ']' 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2579755 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2579755 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2579755' 00:20:19.894 killing process with pid 2579755 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2579755 00:20:19.894 [2024-07-12 14:25:11.880623] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:19.894 14:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2579755 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2581749 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2581749 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2581749 ']' 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.153 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.153 [2024-07-12 14:25:12.125785] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:20.153 [2024-07-12 14:25:12.125831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.153 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.412 [2024-07-12 14:25:12.182367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.412 [2024-07-12 14:25:12.250720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:20.412 [2024-07-12 14:25:12.250761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.412 [2024-07-12 14:25:12.250768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.412 [2024-07-12 14:25:12.250775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.412 [2024-07-12 14:25:12.250780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.412 [2024-07-12 14:25:12.250802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.soGEbjJ5PD 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.soGEbjJ5PD 00:20:20.978 14:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.241 [2024-07-12 14:25:13.106134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.241 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.499 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.499 [2024-07-12 14:25:13.442997] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.499 [2024-07-12 14:25:13.443166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.499 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.757 malloc0 00:20:21.757 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.soGEbjJ5PD 00:20:22.014 [2024-07-12 14:25:13.972512] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2582177 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2582177 /var/tmp/bdevperf.sock 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2582177 ']' 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.014 14:25:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.272 [2024-07-12 14:25:14.034518] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:22.272 [2024-07-12 14:25:14.034567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582177 ] 00:20:22.272 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.272 [2024-07-12 14:25:14.089524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.272 [2024-07-12 14:25:14.162737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.836 14:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.836 14:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:22.836 14:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.soGEbjJ5PD 00:20:23.094 14:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:23.352 [2024-07-12 14:25:15.137426] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.352 
nvme0n1 00:20:23.352 14:25:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.352 Running I/O for 1 seconds... 00:20:24.729 00:20:24.729 Latency(us) 00:20:24.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.729 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.729 Verification LBA range: start 0x0 length 0x2000 00:20:24.729 nvme0n1 : 1.02 4713.40 18.41 0.00 0.00 26942.54 6667.58 30773.43 00:20:24.729 =================================================================================================================== 00:20:24.729 Total : 4713.40 18.41 0.00 0.00 26942.54 6667.58 30773.43 00:20:24.729 0 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2582177 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2582177 ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2582177 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2582177 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2582177' 00:20:24.729 killing process with pid 2582177 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2582177 00:20:24.729 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.729 00:20:24.729 Latency(us) 00:20:24.729 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:20:24.729 =================================================================================================================== 00:20:24.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2582177 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2581749 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2581749 ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2581749 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2581749 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2581749' 00:20:24.729 killing process with pid 2581749 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2581749 00:20:24.729 [2024-07-12 14:25:16.606105] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:24.729 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2581749 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=2582649 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2582649 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2582649 ']' 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.989 14:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.989 [2024-07-12 14:25:16.848573] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:24.989 [2024-07-12 14:25:16.848619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.989 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.989 [2024-07-12 14:25:16.903020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.989 [2024-07-12 14:25:16.981732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.989 [2024-07-12 14:25:16.981765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:24.989 [2024-07-12 14:25:16.981772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.989 [2024-07-12 14:25:16.981778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.989 [2024-07-12 14:25:16.981783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.989 [2024-07-12 14:25:16.981804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.926 [2024-07-12 14:25:17.688925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.926 malloc0 00:20:25.926 [2024-07-12 14:25:17.717219] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.926 [2024-07-12 14:25:17.717403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2582733 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2582733 /var/tmp/bdevperf.sock 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2582733 ']' 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.926 14:25:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.926 [2024-07-12 14:25:17.785970] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:20:25.926 [2024-07-12 14:25:17.786011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582733 ] 00:20:25.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.926 [2024-07-12 14:25:17.838594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.926 [2024-07-12 14:25:17.912275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.864 14:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.864 14:25:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:26.864 14:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.soGEbjJ5PD 00:20:26.864 14:25:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:27.123 [2024-07-12 14:25:18.919477] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.123 nvme0n1 00:20:27.123 14:25:19 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.123 Running I/O for 1 seconds... 
00:20:28.503 00:20:28.503 Latency(us) 00:20:28.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.503 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:28.503 Verification LBA range: start 0x0 length 0x2000 00:20:28.503 nvme0n1 : 1.01 4853.89 18.96 0.00 0.00 26180.75 5128.90 33964.74 00:20:28.503 =================================================================================================================== 00:20:28.503 Total : 4853.89 18.96 0.00 0.00 26180.75 5128.90 33964.74 00:20:28.503 0 00:20:28.503 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:28.503 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.503 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.503 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.503 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:28.503 "subsystems": [ 00:20:28.503 { 00:20:28.503 "subsystem": "keyring", 00:20:28.503 "config": [ 00:20:28.503 { 00:20:28.503 "method": "keyring_file_add_key", 00:20:28.503 "params": { 00:20:28.503 "name": "key0", 00:20:28.503 "path": "/tmp/tmp.soGEbjJ5PD" 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ] 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "subsystem": "iobuf", 00:20:28.503 "config": [ 00:20:28.503 { 00:20:28.503 "method": "iobuf_set_options", 00:20:28.503 "params": { 00:20:28.503 "small_pool_count": 8192, 00:20:28.503 "large_pool_count": 1024, 00:20:28.503 "small_bufsize": 8192, 00:20:28.503 "large_bufsize": 135168 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ] 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "subsystem": "sock", 00:20:28.503 "config": [ 00:20:28.503 { 00:20:28.503 "method": "sock_set_default_impl", 00:20:28.503 "params": { 00:20:28.503 "impl_name": "posix" 00:20:28.503 } 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "method": "sock_impl_set_options", 00:20:28.503 
"params": { 00:20:28.503 "impl_name": "ssl", 00:20:28.503 "recv_buf_size": 4096, 00:20:28.503 "send_buf_size": 4096, 00:20:28.503 "enable_recv_pipe": true, 00:20:28.503 "enable_quickack": false, 00:20:28.503 "enable_placement_id": 0, 00:20:28.503 "enable_zerocopy_send_server": true, 00:20:28.503 "enable_zerocopy_send_client": false, 00:20:28.503 "zerocopy_threshold": 0, 00:20:28.503 "tls_version": 0, 00:20:28.503 "enable_ktls": false 00:20:28.503 } 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "method": "sock_impl_set_options", 00:20:28.503 "params": { 00:20:28.503 "impl_name": "posix", 00:20:28.503 "recv_buf_size": 2097152, 00:20:28.503 "send_buf_size": 2097152, 00:20:28.503 "enable_recv_pipe": true, 00:20:28.503 "enable_quickack": false, 00:20:28.503 "enable_placement_id": 0, 00:20:28.503 "enable_zerocopy_send_server": true, 00:20:28.503 "enable_zerocopy_send_client": false, 00:20:28.503 "zerocopy_threshold": 0, 00:20:28.503 "tls_version": 0, 00:20:28.503 "enable_ktls": false 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ] 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "subsystem": "vmd", 00:20:28.503 "config": [] 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "subsystem": "accel", 00:20:28.503 "config": [ 00:20:28.503 { 00:20:28.503 "method": "accel_set_options", 00:20:28.503 "params": { 00:20:28.503 "small_cache_size": 128, 00:20:28.503 "large_cache_size": 16, 00:20:28.503 "task_count": 2048, 00:20:28.503 "sequence_count": 2048, 00:20:28.503 "buf_count": 2048 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ] 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "subsystem": "bdev", 00:20:28.503 "config": [ 00:20:28.503 { 00:20:28.503 "method": "bdev_set_options", 00:20:28.503 "params": { 00:20:28.503 "bdev_io_pool_size": 65535, 00:20:28.503 "bdev_io_cache_size": 256, 00:20:28.503 "bdev_auto_examine": true, 00:20:28.503 "iobuf_small_cache_size": 128, 00:20:28.503 "iobuf_large_cache_size": 16 00:20:28.503 } 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "method": "bdev_raid_set_options", 
00:20:28.503 "params": { 00:20:28.503 "process_window_size_kb": 1024 00:20:28.503 } 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "method": "bdev_iscsi_set_options", 00:20:28.503 "params": { 00:20:28.503 "timeout_sec": 30 00:20:28.503 } 00:20:28.503 }, 00:20:28.503 { 00:20:28.503 "method": "bdev_nvme_set_options", 00:20:28.503 "params": { 00:20:28.503 "action_on_timeout": "none", 00:20:28.503 "timeout_us": 0, 00:20:28.503 "timeout_admin_us": 0, 00:20:28.503 "keep_alive_timeout_ms": 10000, 00:20:28.503 "arbitration_burst": 0, 00:20:28.503 "low_priority_weight": 0, 00:20:28.503 "medium_priority_weight": 0, 00:20:28.503 "high_priority_weight": 0, 00:20:28.503 "nvme_adminq_poll_period_us": 10000, 00:20:28.503 "nvme_ioq_poll_period_us": 0, 00:20:28.503 "io_queue_requests": 0, 00:20:28.503 "delay_cmd_submit": true, 00:20:28.503 "transport_retry_count": 4, 00:20:28.503 "bdev_retry_count": 3, 00:20:28.503 "transport_ack_timeout": 0, 00:20:28.503 "ctrlr_loss_timeout_sec": 0, 00:20:28.503 "reconnect_delay_sec": 0, 00:20:28.503 "fast_io_fail_timeout_sec": 0, 00:20:28.503 "disable_auto_failback": false, 00:20:28.503 "generate_uuids": false, 00:20:28.503 "transport_tos": 0, 00:20:28.503 "nvme_error_stat": false, 00:20:28.503 "rdma_srq_size": 0, 00:20:28.503 "io_path_stat": false, 00:20:28.503 "allow_accel_sequence": false, 00:20:28.503 "rdma_max_cq_size": 0, 00:20:28.503 "rdma_cm_event_timeout_ms": 0, 00:20:28.503 "dhchap_digests": [ 00:20:28.503 "sha256", 00:20:28.503 "sha384", 00:20:28.503 "sha512" 00:20:28.503 ], 00:20:28.503 "dhchap_dhgroups": [ 00:20:28.503 "null", 00:20:28.503 "ffdhe2048", 00:20:28.503 "ffdhe3072", 00:20:28.503 "ffdhe4096", 00:20:28.503 "ffdhe6144", 00:20:28.504 "ffdhe8192" 00:20:28.504 ] 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_nvme_set_hotplug", 00:20:28.504 "params": { 00:20:28.504 "period_us": 100000, 00:20:28.504 "enable": false 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_malloc_create", 
00:20:28.504 "params": { 00:20:28.504 "name": "malloc0", 00:20:28.504 "num_blocks": 8192, 00:20:28.504 "block_size": 4096, 00:20:28.504 "physical_block_size": 4096, 00:20:28.504 "uuid": "72038e84-ba39-4afd-a912-12923c62e86f", 00:20:28.504 "optimal_io_boundary": 0 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_wait_for_examine" 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "nbd", 00:20:28.504 "config": [] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "scheduler", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "framework_set_scheduler", 00:20:28.504 "params": { 00:20:28.504 "name": "static" 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "nvmf", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "nvmf_set_config", 00:20:28.504 "params": { 00:20:28.504 "discovery_filter": "match_any", 00:20:28.504 "admin_cmd_passthru": { 00:20:28.504 "identify_ctrlr": false 00:20:28.504 } 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_set_max_subsystems", 00:20:28.504 "params": { 00:20:28.504 "max_subsystems": 1024 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_set_crdt", 00:20:28.504 "params": { 00:20:28.504 "crdt1": 0, 00:20:28.504 "crdt2": 0, 00:20:28.504 "crdt3": 0 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_create_transport", 00:20:28.504 "params": { 00:20:28.504 "trtype": "TCP", 00:20:28.504 "max_queue_depth": 128, 00:20:28.504 "max_io_qpairs_per_ctrlr": 127, 00:20:28.504 "in_capsule_data_size": 4096, 00:20:28.504 "max_io_size": 131072, 00:20:28.504 "io_unit_size": 131072, 00:20:28.504 "max_aq_depth": 128, 00:20:28.504 "num_shared_buffers": 511, 00:20:28.504 "buf_cache_size": 4294967295, 00:20:28.504 "dif_insert_or_strip": false, 00:20:28.504 "zcopy": false, 00:20:28.504 "c2h_success": false, 00:20:28.504 "sock_priority": 0, 
00:20:28.504 "abort_timeout_sec": 1, 00:20:28.504 "ack_timeout": 0, 00:20:28.504 "data_wr_pool_size": 0 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_create_subsystem", 00:20:28.504 "params": { 00:20:28.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.504 "allow_any_host": false, 00:20:28.504 "serial_number": "00000000000000000000", 00:20:28.504 "model_number": "SPDK bdev Controller", 00:20:28.504 "max_namespaces": 32, 00:20:28.504 "min_cntlid": 1, 00:20:28.504 "max_cntlid": 65519, 00:20:28.504 "ana_reporting": false 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_subsystem_add_host", 00:20:28.504 "params": { 00:20:28.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.504 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.504 "psk": "key0" 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_subsystem_add_ns", 00:20:28.504 "params": { 00:20:28.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.504 "namespace": { 00:20:28.504 "nsid": 1, 00:20:28.504 "bdev_name": "malloc0", 00:20:28.504 "nguid": "72038E84BA394AFDA91212923C62E86F", 00:20:28.504 "uuid": "72038e84-ba39-4afd-a912-12923c62e86f", 00:20:28.504 "no_auto_visible": false 00:20:28.504 } 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "nvmf_subsystem_add_listener", 00:20:28.504 "params": { 00:20:28.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.504 "listen_address": { 00:20:28.504 "trtype": "TCP", 00:20:28.504 "adrfam": "IPv4", 00:20:28.504 "traddr": "10.0.0.2", 00:20:28.504 "trsvcid": "4420" 00:20:28.504 }, 00:20:28.504 "secure_channel": true 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }' 00:20:28.504 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:28.504 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:28.504 "subsystems": [ 00:20:28.504 { 
00:20:28.504 "subsystem": "keyring", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "keyring_file_add_key", 00:20:28.504 "params": { 00:20:28.504 "name": "key0", 00:20:28.504 "path": "/tmp/tmp.soGEbjJ5PD" 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "iobuf", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "iobuf_set_options", 00:20:28.504 "params": { 00:20:28.504 "small_pool_count": 8192, 00:20:28.504 "large_pool_count": 1024, 00:20:28.504 "small_bufsize": 8192, 00:20:28.504 "large_bufsize": 135168 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "sock", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "sock_set_default_impl", 00:20:28.504 "params": { 00:20:28.504 "impl_name": "posix" 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "sock_impl_set_options", 00:20:28.504 "params": { 00:20:28.504 "impl_name": "ssl", 00:20:28.504 "recv_buf_size": 4096, 00:20:28.504 "send_buf_size": 4096, 00:20:28.504 "enable_recv_pipe": true, 00:20:28.504 "enable_quickack": false, 00:20:28.504 "enable_placement_id": 0, 00:20:28.504 "enable_zerocopy_send_server": true, 00:20:28.504 "enable_zerocopy_send_client": false, 00:20:28.504 "zerocopy_threshold": 0, 00:20:28.504 "tls_version": 0, 00:20:28.504 "enable_ktls": false 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "sock_impl_set_options", 00:20:28.504 "params": { 00:20:28.504 "impl_name": "posix", 00:20:28.504 "recv_buf_size": 2097152, 00:20:28.504 "send_buf_size": 2097152, 00:20:28.504 "enable_recv_pipe": true, 00:20:28.504 "enable_quickack": false, 00:20:28.504 "enable_placement_id": 0, 00:20:28.504 "enable_zerocopy_send_server": true, 00:20:28.504 "enable_zerocopy_send_client": false, 00:20:28.504 "zerocopy_threshold": 0, 00:20:28.504 "tls_version": 0, 00:20:28.504 "enable_ktls": false 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 
00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "vmd", 00:20:28.504 "config": [] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "accel", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "accel_set_options", 00:20:28.504 "params": { 00:20:28.504 "small_cache_size": 128, 00:20:28.504 "large_cache_size": 16, 00:20:28.504 "task_count": 2048, 00:20:28.504 "sequence_count": 2048, 00:20:28.504 "buf_count": 2048 00:20:28.504 } 00:20:28.504 } 00:20:28.504 ] 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "subsystem": "bdev", 00:20:28.504 "config": [ 00:20:28.504 { 00:20:28.504 "method": "bdev_set_options", 00:20:28.504 "params": { 00:20:28.504 "bdev_io_pool_size": 65535, 00:20:28.504 "bdev_io_cache_size": 256, 00:20:28.504 "bdev_auto_examine": true, 00:20:28.504 "iobuf_small_cache_size": 128, 00:20:28.504 "iobuf_large_cache_size": 16 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_raid_set_options", 00:20:28.504 "params": { 00:20:28.504 "process_window_size_kb": 1024 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_iscsi_set_options", 00:20:28.504 "params": { 00:20:28.504 "timeout_sec": 30 00:20:28.504 } 00:20:28.504 }, 00:20:28.504 { 00:20:28.504 "method": "bdev_nvme_set_options", 00:20:28.504 "params": { 00:20:28.504 "action_on_timeout": "none", 00:20:28.504 "timeout_us": 0, 00:20:28.504 "timeout_admin_us": 0, 00:20:28.504 "keep_alive_timeout_ms": 10000, 00:20:28.504 "arbitration_burst": 0, 00:20:28.504 "low_priority_weight": 0, 00:20:28.504 "medium_priority_weight": 0, 00:20:28.504 "high_priority_weight": 0, 00:20:28.504 "nvme_adminq_poll_period_us": 10000, 00:20:28.504 "nvme_ioq_poll_period_us": 0, 00:20:28.505 "io_queue_requests": 512, 00:20:28.505 "delay_cmd_submit": true, 00:20:28.505 "transport_retry_count": 4, 00:20:28.505 "bdev_retry_count": 3, 00:20:28.505 "transport_ack_timeout": 0, 00:20:28.505 "ctrlr_loss_timeout_sec": 0, 00:20:28.505 "reconnect_delay_sec": 0, 00:20:28.505 
"fast_io_fail_timeout_sec": 0, 00:20:28.505 "disable_auto_failback": false, 00:20:28.505 "generate_uuids": false, 00:20:28.505 "transport_tos": 0, 00:20:28.505 "nvme_error_stat": false, 00:20:28.505 "rdma_srq_size": 0, 00:20:28.505 "io_path_stat": false, 00:20:28.505 "allow_accel_sequence": false, 00:20:28.505 "rdma_max_cq_size": 0, 00:20:28.505 "rdma_cm_event_timeout_ms": 0, 00:20:28.505 "dhchap_digests": [ 00:20:28.505 "sha256", 00:20:28.505 "sha384", 00:20:28.505 "sha512" 00:20:28.505 ], 00:20:28.505 "dhchap_dhgroups": [ 00:20:28.505 "null", 00:20:28.505 "ffdhe2048", 00:20:28.505 "ffdhe3072", 00:20:28.505 "ffdhe4096", 00:20:28.505 "ffdhe6144", 00:20:28.505 "ffdhe8192" 00:20:28.505 ] 00:20:28.505 } 00:20:28.505 }, 00:20:28.505 { 00:20:28.505 "method": "bdev_nvme_attach_controller", 00:20:28.505 "params": { 00:20:28.505 "name": "nvme0", 00:20:28.505 "trtype": "TCP", 00:20:28.505 "adrfam": "IPv4", 00:20:28.505 "traddr": "10.0.0.2", 00:20:28.505 "trsvcid": "4420", 00:20:28.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.505 "prchk_reftag": false, 00:20:28.505 "prchk_guard": false, 00:20:28.505 "ctrlr_loss_timeout_sec": 0, 00:20:28.505 "reconnect_delay_sec": 0, 00:20:28.505 "fast_io_fail_timeout_sec": 0, 00:20:28.505 "psk": "key0", 00:20:28.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.505 "hdgst": false, 00:20:28.505 "ddgst": false 00:20:28.505 } 00:20:28.505 }, 00:20:28.505 { 00:20:28.505 "method": "bdev_nvme_set_hotplug", 00:20:28.505 "params": { 00:20:28.505 "period_us": 100000, 00:20:28.505 "enable": false 00:20:28.505 } 00:20:28.505 }, 00:20:28.505 { 00:20:28.505 "method": "bdev_enable_histogram", 00:20:28.505 "params": { 00:20:28.505 "name": "nvme0n1", 00:20:28.505 "enable": true 00:20:28.505 } 00:20:28.505 }, 00:20:28.505 { 00:20:28.505 "method": "bdev_wait_for_examine" 00:20:28.505 } 00:20:28.505 ] 00:20:28.505 }, 00:20:28.505 { 00:20:28.505 "subsystem": "nbd", 00:20:28.505 "config": [] 00:20:28.505 } 00:20:28.505 ] 00:20:28.505 }' 00:20:28.505 
14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2582733 00:20:28.505 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2582733 ']' 00:20:28.505 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2582733 00:20:28.505 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:28.505 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.505 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2582733 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2582733' 00:20:28.764 killing process with pid 2582733 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2582733 00:20:28.764 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.764 00:20:28.764 Latency(us) 00:20:28.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.764 =================================================================================================================== 00:20:28.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2582733 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2582649 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2582649 ']' 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2582649 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2582649 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2582649' 00:20:28.764 killing process with pid 2582649 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2582649 00:20:28.764 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2582649 00:20:29.023 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:29.023 14:25:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.023 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.023 14:25:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:29.023 "subsystems": [ 00:20:29.023 { 00:20:29.023 "subsystem": "keyring", 00:20:29.023 "config": [ 00:20:29.023 { 00:20:29.023 "method": "keyring_file_add_key", 00:20:29.023 "params": { 00:20:29.023 "name": "key0", 00:20:29.023 "path": "/tmp/tmp.soGEbjJ5PD" 00:20:29.023 } 00:20:29.023 } 00:20:29.023 ] 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "subsystem": "iobuf", 00:20:29.023 "config": [ 00:20:29.023 { 00:20:29.023 "method": "iobuf_set_options", 00:20:29.023 "params": { 00:20:29.023 "small_pool_count": 8192, 00:20:29.023 "large_pool_count": 1024, 00:20:29.023 "small_bufsize": 8192, 00:20:29.023 "large_bufsize": 135168 00:20:29.023 } 00:20:29.023 } 00:20:29.023 ] 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "subsystem": "sock", 00:20:29.023 "config": [ 00:20:29.023 { 00:20:29.023 "method": "sock_set_default_impl", 00:20:29.023 "params": { 00:20:29.023 "impl_name": "posix" 00:20:29.023 } 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "method": "sock_impl_set_options", 00:20:29.023 "params": { 00:20:29.023 
"impl_name": "ssl", 00:20:29.023 "recv_buf_size": 4096, 00:20:29.023 "send_buf_size": 4096, 00:20:29.023 "enable_recv_pipe": true, 00:20:29.023 "enable_quickack": false, 00:20:29.023 "enable_placement_id": 0, 00:20:29.023 "enable_zerocopy_send_server": true, 00:20:29.023 "enable_zerocopy_send_client": false, 00:20:29.023 "zerocopy_threshold": 0, 00:20:29.023 "tls_version": 0, 00:20:29.023 "enable_ktls": false 00:20:29.023 } 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "method": "sock_impl_set_options", 00:20:29.023 "params": { 00:20:29.023 "impl_name": "posix", 00:20:29.023 "recv_buf_size": 2097152, 00:20:29.023 "send_buf_size": 2097152, 00:20:29.023 "enable_recv_pipe": true, 00:20:29.023 "enable_quickack": false, 00:20:29.023 "enable_placement_id": 0, 00:20:29.023 "enable_zerocopy_send_server": true, 00:20:29.023 "enable_zerocopy_send_client": false, 00:20:29.023 "zerocopy_threshold": 0, 00:20:29.023 "tls_version": 0, 00:20:29.023 "enable_ktls": false 00:20:29.023 } 00:20:29.023 } 00:20:29.023 ] 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "subsystem": "vmd", 00:20:29.023 "config": [] 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "subsystem": "accel", 00:20:29.023 "config": [ 00:20:29.023 { 00:20:29.023 "method": "accel_set_options", 00:20:29.023 "params": { 00:20:29.023 "small_cache_size": 128, 00:20:29.023 "large_cache_size": 16, 00:20:29.023 "task_count": 2048, 00:20:29.023 "sequence_count": 2048, 00:20:29.023 "buf_count": 2048 00:20:29.023 } 00:20:29.023 } 00:20:29.023 ] 00:20:29.023 }, 00:20:29.023 { 00:20:29.023 "subsystem": "bdev", 00:20:29.023 "config": [ 00:20:29.023 { 00:20:29.023 "method": "bdev_set_options", 00:20:29.023 "params": { 00:20:29.023 "bdev_io_pool_size": 65535, 00:20:29.024 "bdev_io_cache_size": 256, 00:20:29.024 "bdev_auto_examine": true, 00:20:29.024 "iobuf_small_cache_size": 128, 00:20:29.024 "iobuf_large_cache_size": 16 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_raid_set_options", 00:20:29.024 "params": { 
00:20:29.024 "process_window_size_kb": 1024 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_iscsi_set_options", 00:20:29.024 "params": { 00:20:29.024 "timeout_sec": 30 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_nvme_set_options", 00:20:29.024 "params": { 00:20:29.024 "action_on_timeout": "none", 00:20:29.024 "timeout_us": 0, 00:20:29.024 "timeout_admin_us": 0, 00:20:29.024 "keep_alive_timeout_ms": 10000, 00:20:29.024 "arbitration_burst": 0, 00:20:29.024 "low_priority_weight": 0, 00:20:29.024 "medium_priority_weight": 0, 00:20:29.024 "high_priority_weight": 0, 00:20:29.024 "nvme_adminq_poll_period_us": 10000, 00:20:29.024 "nvme_ioq_poll_period_us": 0, 00:20:29.024 "io_queue_requests": 0, 00:20:29.024 "delay_cmd_submit": true, 00:20:29.024 "transport_retry_count": 4, 00:20:29.024 "bdev_retry_count": 3, 00:20:29.024 "transport_ack_timeout": 0, 00:20:29.024 "ctrlr_loss_timeout_sec": 0, 00:20:29.024 "reconnect_delay_sec": 0, 00:20:29.024 "fast_io_fail_timeout_sec": 0, 00:20:29.024 "disable_auto_failback": false, 00:20:29.024 "generate_uuids": false, 00:20:29.024 "transport_tos": 0, 00:20:29.024 "nvme_error_stat": false, 00:20:29.024 "rdma_srq_size": 0, 00:20:29.024 "io_path_stat": false, 00:20:29.024 "allow_accel_sequence": false, 00:20:29.024 "rdma_max_cq_size": 0, 00:20:29.024 "rdma_cm_event_timeout_ms": 0, 00:20:29.024 "dhchap_digests": [ 00:20:29.024 "sha256", 00:20:29.024 "sha384", 00:20:29.024 "sha512" 00:20:29.024 ], 00:20:29.024 "dhchap_dhgroups": [ 00:20:29.024 "null", 00:20:29.024 "ffdhe2048", 00:20:29.024 "ffdhe3072", 00:20:29.024 "ffdhe4096", 00:20:29.024 "ffdhe6144", 00:20:29.024 "ffdhe8192" 00:20:29.024 ] 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_nvme_set_hotplug", 00:20:29.024 "params": { 00:20:29.024 "period_us": 100000, 00:20:29.024 "enable": false 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_malloc_create", 00:20:29.024 "params": { 
00:20:29.024 "name": "malloc0", 00:20:29.024 "num_blocks": 8192, 00:20:29.024 "block_size": 4096, 00:20:29.024 "physical_block_size": 4096, 00:20:29.024 "uuid": "72038e84-ba39-4afd-a912-12923c62e86f", 00:20:29.024 "optimal_io_boundary": 0 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "bdev_wait_for_examine" 00:20:29.024 } 00:20:29.024 ] 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "subsystem": "nbd", 00:20:29.024 "config": [] 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "subsystem": "scheduler", 00:20:29.024 "config": [ 00:20:29.024 { 00:20:29.024 "method": "framework_set_scheduler", 00:20:29.024 "params": { 00:20:29.024 "name": "static" 00:20:29.024 } 00:20:29.024 } 00:20:29.024 ] 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "subsystem": "nvmf", 00:20:29.024 "config": [ 00:20:29.024 { 00:20:29.024 "method": "nvmf_set_config", 00:20:29.024 "params": { 00:20:29.024 "discovery_filter": "match_any", 00:20:29.024 "admin_cmd_passthru": { 00:20:29.024 "identify_ctrlr": false 00:20:29.024 } 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_set_max_subsystems", 00:20:29.024 "params": { 00:20:29.024 "max_subsystems": 1024 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_set_crdt", 00:20:29.024 "params": { 00:20:29.024 "crdt1": 0, 00:20:29.024 "crdt2": 0, 00:20:29.024 "crdt3": 0 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_create_transport", 00:20:29.024 "params": { 00:20:29.024 "trtype": "TCP", 00:20:29.024 "max_queue_depth": 128, 00:20:29.024 "max_io_qpairs_per_ctrlr": 127, 00:20:29.024 "in_capsule_data_size": 4096, 00:20:29.024 "max_io_size": 131072, 00:20:29.024 "io_unit_size": 131072, 00:20:29.024 "max_aq_depth": 128, 00:20:29.024 "num_shared_buffers": 511, 00:20:29.024 "buf_cache_size": 4294967295, 00:20:29.024 "dif_insert_or_strip": false, 00:20:29.024 "zcopy": false, 00:20:29.024 "c2h_success": false, 00:20:29.024 "sock_priority": 0, 00:20:29.024 "abort_timeout_sec": 
1, 00:20:29.024 "ack_timeout": 0, 00:20:29.024 "data_wr_pool_size": 0 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_create_subsystem", 00:20:29.024 "params": { 00:20:29.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.024 "allow_any_host": false, 00:20:29.024 "serial_number": "00000000000000000000", 00:20:29.024 "model_number": "SPDK bdev Controller", 00:20:29.024 "max_namespaces": 32, 00:20:29.024 "min_cntlid": 1, 00:20:29.024 "max_cntlid": 65519, 00:20:29.024 "ana_reporting": false 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_subsystem_add_host", 00:20:29.024 "params": { 00:20:29.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.024 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.024 "psk": "key0" 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_subsystem_add_ns", 00:20:29.024 "params": { 00:20:29.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.024 "namespace": { 00:20:29.024 "nsid": 1, 00:20:29.024 "bdev_name": "malloc0", 00:20:29.024 "nguid": "72038E84BA394AFDA91212923C62E86F", 00:20:29.024 "uuid": "72038e84-ba39-4afd-a912-12923c62e86f", 00:20:29.024 "no_auto_visible": false 00:20:29.024 } 00:20:29.024 } 00:20:29.024 }, 00:20:29.024 { 00:20:29.024 "method": "nvmf_subsystem_add_listener", 00:20:29.024 "params": { 00:20:29.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.024 "listen_address": { 00:20:29.024 "trtype": "TCP", 00:20:29.024 "adrfam": "IPv4", 00:20:29.024 "traddr": "10.0.0.2", 00:20:29.024 "trsvcid": "4420" 00:20:29.024 }, 00:20:29.024 "secure_channel": true 00:20:29.024 } 00:20:29.024 } 00:20:29.024 ] 00:20:29.024 } 00:20:29.024 ] 00:20:29.024 }' 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2583328 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2583328 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # 
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2583328 ']' 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.024 14:25:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.024 [2024-07-12 14:25:21.007913] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:29.024 [2024-07-12 14:25:21.007959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.284 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.284 [2024-07-12 14:25:21.063497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.284 [2024-07-12 14:25:21.142305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.284 [2024-07-12 14:25:21.142342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.284 [2024-07-12 14:25:21.142349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.284 [2024-07-12 14:25:21.142355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:29.284 [2024-07-12 14:25:21.142361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.284 [2024-07-12 14:25:21.142418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.543 [2024-07-12 14:25:21.353200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.543 [2024-07-12 14:25:21.385229] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.543 [2024-07-12 14:25:21.392712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.802 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.802 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:29.802 14:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.802 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.802 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2583461 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2583461 /var/tmp/bdevperf.sock 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2583461 ']' 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.061 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:30.062 14:25:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.062 14:25:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:30.062 "subsystems": [ 00:20:30.062 { 00:20:30.062 "subsystem": "keyring", 00:20:30.062 "config": [ 00:20:30.062 { 00:20:30.062 "method": "keyring_file_add_key", 00:20:30.062 "params": { 00:20:30.062 "name": "key0", 00:20:30.062 "path": "/tmp/tmp.soGEbjJ5PD" 00:20:30.062 } 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "iobuf", 00:20:30.062 "config": [ 00:20:30.062 { 00:20:30.062 "method": "iobuf_set_options", 00:20:30.062 "params": { 00:20:30.062 "small_pool_count": 8192, 00:20:30.062 "large_pool_count": 1024, 00:20:30.062 "small_bufsize": 8192, 00:20:30.062 "large_bufsize": 135168 00:20:30.062 } 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "sock", 00:20:30.062 "config": [ 00:20:30.062 { 00:20:30.062 "method": "sock_set_default_impl", 00:20:30.062 "params": { 00:20:30.062 "impl_name": "posix" 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "sock_impl_set_options", 00:20:30.062 "params": { 00:20:30.062 "impl_name": "ssl", 00:20:30.062 "recv_buf_size": 4096, 00:20:30.062 "send_buf_size": 4096, 00:20:30.062 "enable_recv_pipe": true, 00:20:30.062 "enable_quickack": false, 00:20:30.062 "enable_placement_id": 0, 00:20:30.062 "enable_zerocopy_send_server": true, 00:20:30.062 "enable_zerocopy_send_client": false, 00:20:30.062 "zerocopy_threshold": 0, 00:20:30.062 "tls_version": 0, 00:20:30.062 "enable_ktls": false 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "sock_impl_set_options", 00:20:30.062 "params": { 00:20:30.062 "impl_name": "posix", 00:20:30.062 "recv_buf_size": 2097152, 00:20:30.062 "send_buf_size": 2097152, 00:20:30.062 
"enable_recv_pipe": true, 00:20:30.062 "enable_quickack": false, 00:20:30.062 "enable_placement_id": 0, 00:20:30.062 "enable_zerocopy_send_server": true, 00:20:30.062 "enable_zerocopy_send_client": false, 00:20:30.062 "zerocopy_threshold": 0, 00:20:30.062 "tls_version": 0, 00:20:30.062 "enable_ktls": false 00:20:30.062 } 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "vmd", 00:20:30.062 "config": [] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "accel", 00:20:30.062 "config": [ 00:20:30.062 { 00:20:30.062 "method": "accel_set_options", 00:20:30.062 "params": { 00:20:30.062 "small_cache_size": 128, 00:20:30.062 "large_cache_size": 16, 00:20:30.062 "task_count": 2048, 00:20:30.062 "sequence_count": 2048, 00:20:30.062 "buf_count": 2048 00:20:30.062 } 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "bdev", 00:20:30.062 "config": [ 00:20:30.062 { 00:20:30.062 "method": "bdev_set_options", 00:20:30.062 "params": { 00:20:30.062 "bdev_io_pool_size": 65535, 00:20:30.062 "bdev_io_cache_size": 256, 00:20:30.062 "bdev_auto_examine": true, 00:20:30.062 "iobuf_small_cache_size": 128, 00:20:30.062 "iobuf_large_cache_size": 16 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_raid_set_options", 00:20:30.062 "params": { 00:20:30.062 "process_window_size_kb": 1024 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_iscsi_set_options", 00:20:30.062 "params": { 00:20:30.062 "timeout_sec": 30 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_nvme_set_options", 00:20:30.062 "params": { 00:20:30.062 "action_on_timeout": "none", 00:20:30.062 "timeout_us": 0, 00:20:30.062 "timeout_admin_us": 0, 00:20:30.062 "keep_alive_timeout_ms": 10000, 00:20:30.062 "arbitration_burst": 0, 00:20:30.062 "low_priority_weight": 0, 00:20:30.062 "medium_priority_weight": 0, 00:20:30.062 "high_priority_weight": 0, 00:20:30.062 
"nvme_adminq_poll_period_us": 10000, 00:20:30.062 "nvme_ioq_poll_period_us": 0, 00:20:30.062 "io_queue_requests": 512, 00:20:30.062 "delay_cmd_submit": true, 00:20:30.062 "transport_retry_count": 4, 00:20:30.062 "bdev_retry_count": 3, 00:20:30.062 "transport_ack_timeout": 0, 00:20:30.062 "ctrlr_loss_timeout_sec": 0, 00:20:30.062 "reconnect_delay_sec": 0, 00:20:30.062 "fast_io_fail_timeout_sec": 0, 00:20:30.062 "disable_auto_failback": false, 00:20:30.062 "generate_uuids": false, 00:20:30.062 "transport_tos": 0, 00:20:30.062 "nvme_error_stat": false, 00:20:30.062 "rdma_srq_size": 0, 00:20:30.062 "io_path_stat": false, 00:20:30.062 "allow_accel_sequence": false, 00:20:30.062 "rdma_max_cq_size": 0, 00:20:30.062 "rdma_cm_event_timeout_ms": 0, 00:20:30.062 "dhchap_digests": [ 00:20:30.062 "sha256", 00:20:30.062 "sha384", 00:20:30.062 "sha512" 00:20:30.062 ], 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.062 "dhchap_dhgroups": [ 00:20:30.062 "null", 00:20:30.062 "ffdhe2048", 00:20:30.062 "ffdhe3072", 00:20:30.062 "ffdhe4096", 00:20:30.062 "ffdhe6144", 00:20:30.062 "ffdhe8192" 00:20:30.062 ] 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_nvme_attach_controller", 00:20:30.062 "params": { 00:20:30.062 "name": "nvme0", 00:20:30.062 "trtype": "TCP", 00:20:30.062 "adrfam": "IPv4", 00:20:30.062 "traddr": "10.0.0.2", 00:20:30.062 "trsvcid": "4420", 00:20:30.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.062 "prchk_reftag": false, 00:20:30.062 "prchk_guard": false, 00:20:30.062 "ctrlr_loss_timeout_sec": 0, 00:20:30.062 "reconnect_delay_sec": 0, 00:20:30.062 "fast_io_fail_timeout_sec": 0, 00:20:30.062 "psk": "key0", 00:20:30.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.062 "hdgst": false, 00:20:30.062 "ddgst": false 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_nvme_set_hotplug", 00:20:30.062 "params": { 00:20:30.062 "period_us": 100000, 00:20:30.062 
"enable": false 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_enable_histogram", 00:20:30.062 "params": { 00:20:30.062 "name": "nvme0n1", 00:20:30.062 "enable": true 00:20:30.062 } 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "method": "bdev_wait_for_examine" 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }, 00:20:30.062 { 00:20:30.062 "subsystem": "nbd", 00:20:30.062 "config": [] 00:20:30.062 } 00:20:30.062 ] 00:20:30.062 }' 00:20:30.062 14:25:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.062 [2024-07-12 14:25:21.882241] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:30.062 [2024-07-12 14:25:21.882286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583461 ] 00:20:30.062 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.062 [2024-07-12 14:25:21.935388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.062 [2024-07-12 14:25:22.014092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.322 [2024-07-12 14:25:22.164237] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.890 14:25:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.150 Running I/O for 1 seconds... 00:20:32.085 00:20:32.086 Latency(us) 00:20:32.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.086 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.086 Verification LBA range: start 0x0 length 0x2000 00:20:32.086 nvme0n1 : 1.01 5164.61 20.17 0.00 0.00 24600.04 6553.60 25872.47 00:20:32.086 =================================================================================================================== 00:20:32.086 Total : 5164.61 20.17 0.00 0.00 24600.04 6553.60 25872.47 00:20:32.086 0 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:32.086 14:25:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:32.086 nvmf_trace.0 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:32.086 
14:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2583461 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2583461 ']' 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2583461 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.086 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2583461 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2583461' 00:20:32.345 killing process with pid 2583461 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2583461 00:20:32.345 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.345 00:20:32.345 Latency(us) 00:20:32.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.345 =================================================================================================================== 00:20:32.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2583461 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.345 14:25:24 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.345 rmmod nvme_tcp 00:20:32.345 rmmod nvme_fabrics 00:20:32.345 rmmod nvme_keyring 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2583328 ']' 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2583328 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2583328 ']' 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2583328 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.345 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2583328 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2583328' 00:20:32.604 killing process with pid 2583328 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2583328 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2583328 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.604 
14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.604 14:25:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.136 14:25:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.136 14:25:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RgTrv6rszu /tmp/tmp.YD4t98Nxko /tmp/tmp.soGEbjJ5PD 00:20:35.136 00:20:35.136 real 1m23.258s 00:20:35.136 user 2m6.095s 00:20:35.136 sys 0m30.568s 00:20:35.136 14:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.136 14:25:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.136 ************************************ 00:20:35.136 END TEST nvmf_tls 00:20:35.136 ************************************ 00:20:35.136 14:25:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.136 14:25:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.136 14:25:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.136 14:25:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.136 14:25:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.136 ************************************ 00:20:35.136 START TEST nvmf_fips 00:20:35.136 ************************************ 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.136 * Looking for test storage... 
00:20:35.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.136 14:25:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:35.137 Error setting digest 00:20:35.137 004201804A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:35.137 004201804A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:20:35.137 14:25:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:40.409 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:40.409 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:40.409 Found net devices under 0000:86:00.0: cvl_0_0 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.409 14:25:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:40.409 Found net devices under 0000:86:00.1: cvl_0_1 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:40.409 00:20:40.409 --- 10.0.0.2 ping statistics --- 00:20:40.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.409 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:20:40.409 00:20:40.409 --- 10.0.0.1 ping statistics --- 00:20:40.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.409 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2587364 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2587364 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2587364 ']' 00:20:40.409 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:20:40.410 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.410 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.410 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.410 14:25:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.410 [2024-07-12 14:25:32.357489] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:20:40.410 [2024-07-12 14:25:32.357537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.410 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.410 [2024-07-12 14:25:32.415929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.669 [2024-07-12 14:25:32.487617] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.669 [2024-07-12 14:25:32.487659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.669 [2024-07-12 14:25:32.487665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.669 [2024-07-12 14:25:32.487670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.669 [2024-07-12 14:25:32.487675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.669 [2024-07-12 14:25:32.487693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:41.299 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.557 [2024-07-12 14:25:33.334735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.557 [2024-07-12 14:25:33.350734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:20:41.557 [2024-07-12 14:25:33.350912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.557 [2024-07-12 14:25:33.378760] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.557 malloc0 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2587500 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2587500 /var/tmp/bdevperf.sock 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2587500 ']' 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.557 14:25:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:41.557 [2024-07-12 14:25:33.465445] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:20:41.557 [2024-07-12 14:25:33.465490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587500 ] 00:20:41.557 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.557 [2024-07-12 14:25:33.516234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.816 [2024-07-12 14:25:33.594756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.385 14:25:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.385 14:25:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:42.385 14:25:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:42.644 [2024-07-12 14:25:34.412358] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.644 [2024-07-12 14:25:34.412429] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.644 TLSTESTn1 00:20:42.644 14:25:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.644 Running I/O for 10 seconds... 
00:20:54.851 00:20:54.851 Latency(us) 00:20:54.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.851 Verification LBA range: start 0x0 length 0x2000 00:20:54.851 TLSTESTn1 : 10.02 5516.89 21.55 0.00 0.00 23163.55 5328.36 35788.35 00:20:54.851 =================================================================================================================== 00:20:54.851 Total : 5516.89 21.55 0.00 0.00 23163.55 5328.36 35788.35 00:20:54.851 0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:54.851 nvmf_trace.0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2587500 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2587500 ']' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 2587500 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587500 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587500' 00:20:54.851 killing process with pid 2587500 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2587500 00:20:54.851 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.851 00:20:54.851 Latency(us) 00:20:54.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.851 =================================================================================================================== 00:20:54.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.851 [2024-07-12 14:25:44.770513] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2587500 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.851 14:25:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:20:54.851 rmmod nvme_tcp 00:20:54.851 rmmod nvme_fabrics 00:20:54.851 rmmod nvme_keyring 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2587364 ']' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2587364 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2587364 ']' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2587364 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587364 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587364' 00:20:54.851 killing process with pid 2587364 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2587364 00:20:54.851 [2024-07-12 14:25:45.056749] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2587364 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.851 14:25:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.421 14:25:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.421 14:25:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:55.421 00:20:55.421 real 0m20.616s 00:20:55.421 user 0m22.668s 00:20:55.421 sys 0m8.753s 00:20:55.421 14:25:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.421 14:25:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.421 ************************************ 00:20:55.421 END TEST nvmf_fips 00:20:55.421 ************************************ 00:20:55.421 14:25:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:55.421 14:25:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:55.421 14:25:47 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:55.421 14:25:47 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:55.421 14:25:47 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:55.421 14:25:47 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.421 14:25:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.699 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.699 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.699 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.699 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:00.699 14:25:52 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:00.699 14:25:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.699 14:25:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.699 14:25:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.700 ************************************ 00:21:00.700 START TEST nvmf_perf_adq 00:21:00.700 ************************************ 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:00.700 * Looking for test storage... 00:21:00.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.700 14:25:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.700 14:25:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.977 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.977 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:05.977 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.977 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:05.977 14:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:06.913 14:25:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:08.818 14:26:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:14.089 14:26:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:14.089 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.090 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:21:14.090 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.090 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.090 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:14.090 00:21:14.090 --- 10.0.0.2 ping statistics --- 00:21:14.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.090 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:14.090 14:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:14.090 00:21:14.090 --- 10.0.0.1 ping statistics --- 00:21:14.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.090 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2597188 00:21:14.090 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2597188 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 2597188 ']' 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.091 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.091 [2024-07-12 14:26:06.094655] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:21:14.091 [2024-07-12 14:26:06.094696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.350 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.350 [2024-07-12 14:26:06.155849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.350 [2024-07-12 14:26:06.237454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.350 [2024-07-12 14:26:06.237490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.350 [2024-07-12 14:26:06.237497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.350 [2024-07-12 14:26:06.237504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.350 [2024-07-12 14:26:06.237509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
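The nvmf_tgt process above was launched through the `NVMF_TARGET_NS_CMD` prefix built at nvmf/common.sh@242-243: the target-side interface lives in its own network namespace, so every target-side command gets wrapped in `ip netns exec`. A minimal sketch of that pattern (names taken from this log; the wrapped command is only printed here, since actually entering a netns needs root):

```shell
# Mirror of nvmf/common.sh@242-243: commands aimed at the target run inside
# the namespace holding cvl_0_0, via an `ip netns exec` prefix array.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Preview the wrapped command instead of executing it (real run needs root).
preview="${NVMF_TARGET_NS_CMD[*]} ping -c 1 10.0.0.1"
echo "$preview"
```

This is why the log's ping toward 10.0.0.1 (common.sh@268) and the target launch itself (common.sh@480) both start with `ip netns exec cvl_0_0_ns_spdk`.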
00:21:14.350 [2024-07-12 14:26:06.237553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.350 [2024-07-12 14:26:06.237573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.350 [2024-07-12 14:26:06.237655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.350 [2024-07-12 14:26:06.237657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.918 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.918 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:14.918 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.918 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.918 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.177 14:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
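`adq_configure_nvmf_target` (perf_adq.sh@42-43) first asks the target for its default socket implementation, extracts the name with jq, and then enables placement IDs and zero-copy send on that implementation. The jq step can be reproduced on a sample reply shaped like the `sock_get_default_impl` output (the sample JSON is an assumption standing in for the live RPC reply; requires jq):

```shell
# Hypothetical stand-in for the `sock_get_default_impl` RPC reply; the real
# test issues the RPC against the running nvmf_tgt and pipes it to jq.
reply='{"impl_name": "posix"}'

# perf_adq.sh@42: pull out the implementation name...
socket_impl=$(echo "$reply" | jq -r .impl_name)

# ...which then feeds `sock_impl_set_options -i "$socket_impl"` in the test.
echo "$socket_impl"
```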
00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 [2024-07-12 14:26:07.081969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 Malloc1 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 
14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.178 [2024-07-12 14:26:07.129763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2597438 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:15.178 14:26:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:15.178 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:17.736 
"tick_rate": 2300000000, 00:21:17.736 "poll_groups": [ 00:21:17.736 { 00:21:17.736 "name": "nvmf_tgt_poll_group_000", 00:21:17.736 "admin_qpairs": 1, 00:21:17.736 "io_qpairs": 1, 00:21:17.736 "current_admin_qpairs": 1, 00:21:17.736 "current_io_qpairs": 1, 00:21:17.736 "pending_bdev_io": 0, 00:21:17.736 "completed_nvme_io": 20067, 00:21:17.736 "transports": [ 00:21:17.736 { 00:21:17.736 "trtype": "TCP" 00:21:17.736 } 00:21:17.736 ] 00:21:17.736 }, 00:21:17.736 { 00:21:17.736 "name": "nvmf_tgt_poll_group_001", 00:21:17.736 "admin_qpairs": 0, 00:21:17.736 "io_qpairs": 1, 00:21:17.736 "current_admin_qpairs": 0, 00:21:17.736 "current_io_qpairs": 1, 00:21:17.736 "pending_bdev_io": 0, 00:21:17.736 "completed_nvme_io": 20289, 00:21:17.736 "transports": [ 00:21:17.736 { 00:21:17.736 "trtype": "TCP" 00:21:17.736 } 00:21:17.736 ] 00:21:17.736 }, 00:21:17.736 { 00:21:17.736 "name": "nvmf_tgt_poll_group_002", 00:21:17.736 "admin_qpairs": 0, 00:21:17.736 "io_qpairs": 1, 00:21:17.736 "current_admin_qpairs": 0, 00:21:17.736 "current_io_qpairs": 1, 00:21:17.736 "pending_bdev_io": 0, 00:21:17.736 "completed_nvme_io": 20127, 00:21:17.736 "transports": [ 00:21:17.736 { 00:21:17.736 "trtype": "TCP" 00:21:17.736 } 00:21:17.736 ] 00:21:17.736 }, 00:21:17.736 { 00:21:17.736 "name": "nvmf_tgt_poll_group_003", 00:21:17.736 "admin_qpairs": 0, 00:21:17.736 "io_qpairs": 1, 00:21:17.736 "current_admin_qpairs": 0, 00:21:17.736 "current_io_qpairs": 1, 00:21:17.736 "pending_bdev_io": 0, 00:21:17.736 "completed_nvme_io": 19986, 00:21:17.736 "transports": [ 00:21:17.736 { 00:21:17.736 "trtype": "TCP" 00:21:17.736 } 00:21:17.736 ] 00:21:17.736 } 00:21:17.736 ] 00:21:17.736 }' 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:17.736 14:26:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:17.736 14:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2597438 00:21:25.871 Initializing NVMe Controllers 00:21:25.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:25.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:25.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:25.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:25.871 Initialization complete. Launching workers. 00:21:25.871 ======================================================== 00:21:25.871 Latency(us) 00:21:25.871 Device Information : IOPS MiB/s Average min max 00:21:25.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10599.00 41.40 6038.93 1838.56 9492.41 00:21:25.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10724.40 41.89 5969.33 2268.40 9400.45 00:21:25.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10641.10 41.57 6014.76 1691.26 13240.12 00:21:25.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10619.00 41.48 6028.31 2035.13 9736.69 00:21:25.871 ======================================================== 00:21:25.871 Total : 42583.49 166.34 6012.71 1691.26 13240.12 00:21:25.871 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:25.871 14:26:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.871 rmmod nvme_tcp 00:21:25.871 rmmod nvme_fabrics 00:21:25.871 rmmod nvme_keyring 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2597188 ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2597188 ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2597188' 00:21:25.871 killing process with pid 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2597188 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.871 14:26:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.871 14:26:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.777 14:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.777 14:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:27.777 14:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:29.154 14:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:31.059 14:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.333 14:26:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.333 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.333 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:36.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:21:36.333 00:21:36.333 --- 10.0.0.2 ping statistics --- 00:21:36.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.333 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:36.333 00:21:36.333 --- 10.0.0.1 ping statistics --- 00:21:36.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.333 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:36.333 14:26:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:21:36.333 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:36.333 net.core.busy_poll = 1 00:21:36.333 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:36.333 net.core.busy_read = 1 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2601224 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2601224 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:36.334 14:26:28 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2601224 ']' 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.334 14:26:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.334 [2024-07-12 14:26:28.270395] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:21:36.334 [2024-07-12 14:26:28.270462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.334 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.334 [2024-07-12 14:26:28.329553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.592 [2024-07-12 14:26:28.405992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.592 [2024-07-12 14:26:28.406031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.592 [2024-07-12 14:26:28.406038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.592 [2024-07-12 14:26:28.406043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:36.592 [2024-07-12 14:26:28.406048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.592 [2024-07-12 14:26:28.406135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.592 [2024-07-12 14:26:28.406233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.592 [2024-07-12 14:26:28.406297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.592 [2024-07-12 14:26:28.406299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.160 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 [2024-07-12 14:26:29.259200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 Malloc1 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.419 [2024-07-12 14:26:29.302796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2601476 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:37.419 14:26:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:37.419 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.322 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:39.322 14:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.322 14:26:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:39.581 "tick_rate": 2300000000, 00:21:39.581 "poll_groups": [ 00:21:39.581 { 00:21:39.581 "name": "nvmf_tgt_poll_group_000", 00:21:39.581 "admin_qpairs": 1, 00:21:39.581 "io_qpairs": 2, 00:21:39.581 "current_admin_qpairs": 1, 00:21:39.581 "current_io_qpairs": 2, 00:21:39.581 "pending_bdev_io": 0, 00:21:39.581 "completed_nvme_io": 28663, 00:21:39.581 "transports": [ 00:21:39.581 { 00:21:39.581 "trtype": "TCP" 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 }, 00:21:39.581 { 00:21:39.581 "name": "nvmf_tgt_poll_group_001", 00:21:39.581 "admin_qpairs": 0, 00:21:39.581 "io_qpairs": 2, 00:21:39.581 "current_admin_qpairs": 0, 00:21:39.581 "current_io_qpairs": 2, 00:21:39.581 "pending_bdev_io": 0, 00:21:39.581 "completed_nvme_io": 28947, 00:21:39.581 "transports": [ 00:21:39.581 { 00:21:39.581 "trtype": "TCP" 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 }, 00:21:39.581 { 00:21:39.581 "name": "nvmf_tgt_poll_group_002", 00:21:39.581 "admin_qpairs": 0, 00:21:39.581 "io_qpairs": 0, 00:21:39.581 "current_admin_qpairs": 0, 00:21:39.581 "current_io_qpairs": 0, 00:21:39.581 "pending_bdev_io": 0, 00:21:39.581 "completed_nvme_io": 0, 00:21:39.581 "transports": [ 00:21:39.581 { 00:21:39.581 "trtype": "TCP" 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 }, 00:21:39.581 { 00:21:39.581 "name": "nvmf_tgt_poll_group_003", 00:21:39.581 "admin_qpairs": 0, 00:21:39.581 "io_qpairs": 0, 00:21:39.581 "current_admin_qpairs": 0, 00:21:39.581 "current_io_qpairs": 0, 00:21:39.581 "pending_bdev_io": 0, 00:21:39.581 "completed_nvme_io": 0, 00:21:39.581 "transports": [ 00:21:39.581 { 00:21:39.581 "trtype": "TCP" 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 }' 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:39.581 14:26:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2601476 00:21:47.697 Initializing NVMe Controllers 00:21:47.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:47.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:47.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:47.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:47.697 Initialization complete. Launching workers. 00:21:47.697 ======================================================== 00:21:47.697 Latency(us) 00:21:47.697 Device Information : IOPS MiB/s Average min max 00:21:47.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7795.44 30.45 8210.07 1260.59 55241.24 00:21:47.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7031.34 27.47 9101.89 1596.81 54163.71 00:21:47.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8255.93 32.25 7764.44 1508.54 52167.52 00:21:47.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7490.14 29.26 8543.95 1610.59 53179.29 00:21:47.697 ======================================================== 00:21:47.697 Total : 30572.86 119.43 8376.64 1260.59 55241.24 00:21:47.697 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.697 rmmod nvme_tcp 00:21:47.697 rmmod nvme_fabrics 00:21:47.697 rmmod nvme_keyring 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2601224 ']' 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2601224 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2601224 ']' 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2601224 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2601224 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2601224' 00:21:47.697 killing process with pid 2601224 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2601224 00:21:47.697 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2601224 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.956 14:26:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.351 14:26:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.351 14:26:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:51.351 00:21:51.351 real 0m50.393s 00:21:51.351 user 2m49.131s 00:21:51.351 sys 0m9.345s 00:21:51.351 14:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.351 14:26:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.351 ************************************ 00:21:51.351 END TEST nvmf_perf_adq 00:21:51.351 ************************************ 00:21:51.351 14:26:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:51.351 14:26:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:51.351 14:26:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.351 14:26:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.351 14:26:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.351 ************************************ 00:21:51.351 START TEST nvmf_shutdown 00:21:51.351 ************************************ 00:21:51.351 
14:26:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:51.351 * Looking for test storage... 00:21:51.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.351 14:26:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.351 
14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.351 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.352 14:26:43 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:51.352 ************************************ 00:21:51.352 START TEST nvmf_shutdown_tc1 00:21:51.352 ************************************ 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.352 14:26:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.352 14:26:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:56.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:56.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:56.624 Found net devices under 0000:86:00.0: cvl_0_0 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.624 14:26:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:56.624 Found net devices under 0000:86:00.1: cvl_0_1 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.624 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.624 
14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:21:56.625 00:21:56.625 --- 10.0.0.2 ping statistics --- 00:21:56.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.625 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:56.625 00:21:56.625 --- 10.0.0.1 ping statistics --- 00:21:56.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.625 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.625 14:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2606688 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2606688 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2606688 ']' 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:56.625 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.625 [2024-07-12 14:26:48.059123] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:21:56.625 [2024-07-12 14:26:48.059167] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.625 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.625 [2024-07-12 14:26:48.116095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.625 [2024-07-12 14:26:48.196334] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.625 [2024-07-12 14:26:48.196368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.625 [2024-07-12 14:26:48.196374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.625 [2024-07-12 14:26:48.196386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.625 [2024-07-12 14:26:48.196391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.625 [2024-07-12 14:26:48.196431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.625 [2024-07-12 14:26:48.196520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.625 [2024-07-12 14:26:48.196625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.625 [2024-07-12 14:26:48.196626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:56.886 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.886 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:56.886 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.886 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.886 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.145 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.146 [2024-07-12 14:26:48.914374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:57.146 
14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.146 14:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.146 Malloc1 00:21:57.146 [2024-07-12 14:26:49.010034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.146 Malloc2 00:21:57.146 Malloc3 00:21:57.146 Malloc4 00:21:57.405 Malloc5 00:21:57.405 Malloc6 00:21:57.405 Malloc7 00:21:57.405 Malloc8 00:21:57.405 Malloc9 00:21:57.405 Malloc10 00:21:57.405 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.405 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:57.405 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.405 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2606974 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2606974 
/var/tmp/bdevperf.sock 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2606974 ']' 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 
}, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 [2024-07-12 14:26:49.476179] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:21:57.665 [2024-07-12 14:26:49.476225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.665 EOF 00:21:57.665 )") 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.665 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.665 { 00:21:57.665 "params": { 00:21:57.665 "name": "Nvme$subsystem", 00:21:57.665 "trtype": "$TEST_TRANSPORT", 00:21:57.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.665 "adrfam": "ipv4", 00:21:57.665 "trsvcid": "$NVMF_PORT", 00:21:57.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.665 "hdgst": ${hdgst:-false}, 00:21:57.665 "ddgst": ${ddgst:-false} 00:21:57.665 }, 00:21:57.665 "method": "bdev_nvme_attach_controller" 00:21:57.665 } 00:21:57.666 EOF 00:21:57.666 )") 00:21:57.666 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:57.666 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.666 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:57.666 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:57.666 14:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme1", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme2", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme3", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme4", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme5", 00:21:57.666 
"trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme6", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme7", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme8", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme9", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": 
false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 },{ 00:21:57.666 "params": { 00:21:57.666 "name": "Nvme10", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:57.666 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false 00:21:57.666 }, 00:21:57.666 "method": "bdev_nvme_attach_controller" 00:21:57.666 }' 00:21:57.666 [2024-07-12 14:26:49.532282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.666 [2024-07-12 14:26:49.605532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.046 14:26:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.046 14:26:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:59.046 14:26:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:59.046 14:26:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.046 14:26:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.046 14:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.046 14:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2606974 00:21:59.046 14:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:59.046 14:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:00.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2606974 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2606688 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": 
"ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": 
${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 "method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.425 [2024-07-12 14:26:52.057404] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:22:00.425 [2024-07-12 14:26:52.057454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607456 ] 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.425 { 00:22:00.425 "params": { 00:22:00.425 "name": "Nvme$subsystem", 00:22:00.425 "trtype": "$TEST_TRANSPORT", 00:22:00.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.425 "adrfam": "ipv4", 00:22:00.425 "trsvcid": "$NVMF_PORT", 00:22:00.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.425 "hdgst": ${hdgst:-false}, 00:22:00.425 "ddgst": ${ddgst:-false} 00:22:00.425 }, 00:22:00.425 
"method": "bdev_nvme_attach_controller" 00:22:00.425 } 00:22:00.425 EOF 00:22:00.425 )") 00:22:00.425 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.426 { 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme$subsystem", 00:22:00.426 "trtype": "$TEST_TRANSPORT", 00:22:00.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "$NVMF_PORT", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.426 "hdgst": ${hdgst:-false}, 00:22:00.426 "ddgst": ${ddgst:-false} 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 } 00:22:00.426 EOF 00:22:00.426 )") 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.426 { 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme$subsystem", 00:22:00.426 "trtype": "$TEST_TRANSPORT", 00:22:00.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "$NVMF_PORT", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.426 "hdgst": ${hdgst:-false}, 00:22:00.426 "ddgst": ${ddgst:-false} 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 } 00:22:00.426 EOF 00:22:00.426 )") 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.426 14:26:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:00.426 14:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme1", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme2", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme3", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme4", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 
},{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme5", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme6", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme7", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme8", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme9", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:00.426 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 },{ 00:22:00.426 "params": { 00:22:00.426 "name": "Nvme10", 00:22:00.426 "trtype": "tcp", 00:22:00.426 "traddr": "10.0.0.2", 00:22:00.426 "adrfam": "ipv4", 00:22:00.426 "trsvcid": "4420", 00:22:00.426 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:00.426 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:00.426 "hdgst": false, 00:22:00.426 "ddgst": false 00:22:00.426 }, 00:22:00.426 "method": "bdev_nvme_attach_controller" 00:22:00.426 }' 00:22:00.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.426 [2024-07-12 14:26:52.113311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.427 [2024-07-12 14:26:52.186927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.805 Running I/O for 1 seconds... 00:22:03.183 00:22:03.184 Latency(us) 00:22:03.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.184 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme1n1 : 1.13 282.91 17.68 0.00 0.00 223790.48 21769.35 206979.78 00:22:03.184 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme2n1 : 1.13 283.23 17.70 0.00 0.00 220828.54 14702.86 217921.45 00:22:03.184 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme3n1 : 1.10 291.95 18.25 0.00 0.00 210950.59 23934.89 207891.59 00:22:03.184 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme4n1 : 1.10 291.48 18.22 0.00 0.00 208107.34 14075.99 217009.64 00:22:03.184 Job: Nvme5n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme5n1 : 1.14 279.88 17.49 0.00 0.00 214033.45 19033.93 218833.25 00:22:03.184 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme6n1 : 1.09 235.34 14.71 0.00 0.00 249850.43 16640.45 218833.25 00:22:03.184 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme7n1 : 1.14 281.85 17.62 0.00 0.00 206149.36 15158.76 214274.23 00:22:03.184 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme8n1 : 1.14 284.25 17.77 0.00 0.00 201292.08 2037.31 217921.45 00:22:03.184 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme9n1 : 1.15 282.35 17.65 0.00 0.00 199720.12 1474.56 220656.86 00:22:03.184 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:03.184 Verification LBA range: start 0x0 length 0x400 00:22:03.184 Nvme10n1 : 1.15 278.05 17.38 0.00 0.00 199891.21 14930.81 238892.97 00:22:03.184 =================================================================================================================== 00:22:03.184 Total : 2791.29 174.46 0.00 0.00 212687.65 1474.56 238892.97 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.184 rmmod nvme_tcp 00:22:03.184 rmmod nvme_fabrics 00:22:03.184 rmmod nvme_keyring 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2606688 ']' 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2606688 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2606688 ']' 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2606688 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2606688 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2606688' 00:22:03.184 killing process with pid 2606688 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2606688 00:22:03.184 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2606688 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.752 14:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.659 00:22:05.659 real 0m14.564s 00:22:05.659 user 0m34.636s 00:22:05.659 sys 0m5.112s 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.659 ************************************ 00:22:05.659 END TEST nvmf_shutdown_tc1 00:22:05.659 ************************************ 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.659 14:26:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.919 ************************************ 00:22:05.919 START TEST nvmf_shutdown_tc2 00:22:05.919 ************************************ 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@297 -- # local -ga x722 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.919 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.920 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.920 14:26:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.920 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:22:05.920 00:22:05.920 --- 10.0.0.2 ping statistics --- 00:22:05.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.920 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:05.920 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:22:06.180 00:22:06.180 --- 10.0.0.1 ping statistics --- 00:22:06.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.180 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2608479 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2608479 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2608479 ']' 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:06.180 14:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.180 [2024-07-12 14:26:58.018667] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:22:06.180 [2024-07-12 14:26:58.018714] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.180 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.180 [2024-07-12 14:26:58.075237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.180 [2024-07-12 14:26:58.154984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.180 [2024-07-12 14:26:58.155019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.180 [2024-07-12 14:26:58.155026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.180 [2024-07-12 14:26:58.155032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.180 [2024-07-12 14:26:58.155037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.180 [2024-07-12 14:26:58.155140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.180 [2024-07-12 14:26:58.155158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.180 [2024-07-12 14:26:58.155268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.180 [2024-07-12 14:26:58.155269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.118 [2024-07-12 14:26:58.853183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:07.118 
14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.118 14:26:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.118 Malloc1 00:22:07.118 [2024-07-12 14:26:58.949156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.118 Malloc2 00:22:07.118 Malloc3 00:22:07.118 Malloc4 00:22:07.118 Malloc5 00:22:07.377 Malloc6 00:22:07.377 Malloc7 00:22:07.377 Malloc8 00:22:07.377 Malloc9 00:22:07.377 Malloc10 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2608754 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
2608754 /var/tmp/bdevperf.sock 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2608754 ']' 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.377 { 00:22:07.377 "params": { 00:22:07.377 "name": "Nvme$subsystem", 00:22:07.377 "trtype": "$TEST_TRANSPORT", 00:22:07.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.377 "adrfam": "ipv4", 00:22:07.377 "trsvcid": "$NVMF_PORT", 00:22:07.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.377 "hdgst": ${hdgst:-false}, 00:22:07.377 "ddgst": ${ddgst:-false} 00:22:07.377 }, 00:22:07.377 "method": "bdev_nvme_attach_controller" 00:22:07.377 } 00:22:07.377 EOF 00:22:07.377 )") 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.377 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.377 { 00:22:07.377 "params": { 00:22:07.377 "name": "Nvme$subsystem", 00:22:07.377 "trtype": "$TEST_TRANSPORT", 00:22:07.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.377 "adrfam": "ipv4", 00:22:07.377 "trsvcid": "$NVMF_PORT", 00:22:07.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.377 "hdgst": ${hdgst:-false}, 00:22:07.377 "ddgst": ${ddgst:-false} 00:22:07.377 
}, 00:22:07.377 "method": "bdev_nvme_attach_controller" 00:22:07.377 } 00:22:07.377 EOF 00:22:07.377 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 
"params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 [2024-07-12 14:26:59.420785] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:22:07.637 [2024-07-12 14:26:59.420831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608754 ] 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.637 { 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme$subsystem", 00:22:07.637 "trtype": "$TEST_TRANSPORT", 00:22:07.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "$NVMF_PORT", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.637 "hdgst": ${hdgst:-false}, 00:22:07.637 "ddgst": ${ddgst:-false} 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 } 00:22:07.637 EOF 00:22:07.637 )") 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:22:07.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:07.637 14:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme1", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 },{ 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme2", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 },{ 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme3", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 },{ 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme4", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 },{ 
00:22:07.637 "params": { 00:22:07.637 "name": "Nvme5", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.637 "method": "bdev_nvme_attach_controller" 00:22:07.637 },{ 00:22:07.637 "params": { 00:22:07.637 "name": "Nvme6", 00:22:07.637 "trtype": "tcp", 00:22:07.637 "traddr": "10.0.0.2", 00:22:07.637 "adrfam": "ipv4", 00:22:07.637 "trsvcid": "4420", 00:22:07.637 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.637 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:07.637 "hdgst": false, 00:22:07.637 "ddgst": false 00:22:07.637 }, 00:22:07.638 "method": "bdev_nvme_attach_controller" 00:22:07.638 },{ 00:22:07.638 "params": { 00:22:07.638 "name": "Nvme7", 00:22:07.638 "trtype": "tcp", 00:22:07.638 "traddr": "10.0.0.2", 00:22:07.638 "adrfam": "ipv4", 00:22:07.638 "trsvcid": "4420", 00:22:07.638 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.638 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.638 "hdgst": false, 00:22:07.638 "ddgst": false 00:22:07.638 }, 00:22:07.638 "method": "bdev_nvme_attach_controller" 00:22:07.638 },{ 00:22:07.638 "params": { 00:22:07.638 "name": "Nvme8", 00:22:07.638 "trtype": "tcp", 00:22:07.638 "traddr": "10.0.0.2", 00:22:07.638 "adrfam": "ipv4", 00:22:07.638 "trsvcid": "4420", 00:22:07.638 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.638 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.638 "hdgst": false, 00:22:07.638 "ddgst": false 00:22:07.638 }, 00:22:07.638 "method": "bdev_nvme_attach_controller" 00:22:07.638 },{ 00:22:07.638 "params": { 00:22:07.638 "name": "Nvme9", 00:22:07.638 "trtype": "tcp", 00:22:07.638 "traddr": "10.0.0.2", 00:22:07.638 "adrfam": "ipv4", 00:22:07.638 "trsvcid": "4420", 00:22:07.638 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.638 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:07.638 "hdgst": false, 00:22:07.638 "ddgst": false 00:22:07.638 }, 00:22:07.638 "method": "bdev_nvme_attach_controller" 00:22:07.638 },{ 00:22:07.638 "params": { 00:22:07.638 "name": "Nvme10", 00:22:07.638 "trtype": "tcp", 00:22:07.638 "traddr": "10.0.0.2", 00:22:07.638 "adrfam": "ipv4", 00:22:07.638 "trsvcid": "4420", 00:22:07.638 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.638 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.638 "hdgst": false, 00:22:07.638 "ddgst": false 00:22:07.638 }, 00:22:07.638 "method": "bdev_nvme_attach_controller" 00:22:07.638 }' 00:22:07.638 [2024-07-12 14:26:59.477789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.638 [2024-07-12 14:26:59.550853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.542 Running I/O for 10 seconds... 00:22:09.542 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.542 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:09.542 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:09.542 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.542 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:09.543 14:27:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.543 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.543 14:27:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.802 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.802 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:09.802 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:09.802 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.061 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2608754 00:22:10.062 
14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2608754 ']' 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2608754 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2608754 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2608754' 00:22:10.062 killing process with pid 2608754 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2608754 00:22:10.062 14:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2608754 00:22:10.062 Received shutdown signal, test time was about 0.936456 seconds 00:22:10.062 00:22:10.062 Latency(us) 00:22:10.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.062 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme1n1 : 0.92 279.73 17.48 0.00 0.00 226218.74 18008.15 221568.67 00:22:10.062 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme2n1 : 0.91 281.59 17.60 0.00 0.00 220947.59 17210.32 220656.86 00:22:10.062 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme3n1 : 0.91 282.20 17.64 0.00 0.00 216419.28 14588.88 216097.84 00:22:10.062 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme4n1 : 0.91 353.76 22.11 0.00 0.00 168921.09 15956.59 214274.23 00:22:10.062 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme5n1 : 0.90 291.33 18.21 0.00 0.00 200624.30 3219.81 202420.76 00:22:10.062 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme6n1 : 0.92 278.88 17.43 0.00 0.00 207200.39 16982.37 227039.50 00:22:10.062 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme7n1 : 0.94 278.91 17.43 0.00 0.00 193959.92 13905.03 215186.03 00:22:10.062 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme8n1 : 0.90 284.15 17.76 0.00 0.00 194830.36 13905.03 214274.23 00:22:10.062 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme9n1 : 0.88 218.73 13.67 0.00 0.00 246934.48 21655.37 217009.64 00:22:10.062 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.062 Verification LBA range: start 0x0 length 0x400 00:22:10.062 Nvme10n1 : 0.89 216.14 13.51 0.00 0.00 245161.03 21313.45 240716.58 00:22:10.062 =================================================================================================================== 00:22:10.062 Total : 2765.41 172.84 0.00 0.00 209178.21 3219.81 240716.58 00:22:10.321 14:27:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
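The `killprocess 2608754` sequence above walks a guarded teardown: confirm the pid argument is non-empty, confirm the process still exists, read its command name with `ps` and refuse to kill a bare `sudo` wrapper, then kill and reap it. A self-contained sketch of that pattern, exercised against a throwaway `sleep` process rather than the real bdevperf reactor (the `-p` form of `ps` is used here for portability; the helper's exact internals are inferred from the trace, not copied from autotest_common.sh):

```shell
# Sketch of the killprocess guard sequence visible in the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '-z' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1     # process must still be alive
    local process_name
    process_name=$(ps -p "$pid" -o comm=)      # trace: ps --no-headers -o comm=
    [ "$process_name" != "sudo" ] || return 1  # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; wait reports the signal
}

sleep 60 &
victim=$!
killprocess "$victim" && result=ok || result=fail
echo "$result"
```

The final `wait` is what lets the test observe the "Received shutdown signal" summary before moving on.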
target/shutdown.sh@113 -- # sleep 1 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2608479 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.256 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.256 rmmod nvme_tcp 00:22:11.256 rmmod nvme_fabrics 00:22:11.515 rmmod nvme_keyring 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2608479 ']' 00:22:11.515 
14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2608479 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2608479 ']' 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2608479 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2608479 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2608479' 00:22:11.515 killing process with pid 2608479 00:22:11.515 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2608479 00:22:11.516 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2608479 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.774 14:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.371 00:22:14.371 real 0m8.116s 00:22:14.371 user 0m24.930s 00:22:14.371 sys 0m1.332s 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.371 ************************************ 00:22:14.371 END TEST nvmf_shutdown_tc2 00:22:14.371 ************************************ 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:14.371 ************************************ 00:22:14.371 START TEST nvmf_shutdown_tc3 00:22:14.371 ************************************ 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:14.371 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.372 14:27:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.372 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.372 
14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.372 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:22:14.372 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.372 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.372 14:27:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:22:14.372 00:22:14.372 --- 10.0.0.2 ping statistics --- 00:22:14.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.372 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:14.372 00:22:14.372 --- 10.0.0.1 ping statistics --- 00:22:14.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.372 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
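The nvmf_tcp_init sequence above builds the test topology: the target-side NIC (cvl_0_0) is moved into its own network namespace so initiator and target traverse a real TCP path on one host, addresses 10.0.0.1/24 and 10.0.0.2/24 are assigned on either side, port 4420 is opened, and connectivity is verified with a ping in each direction. The dry-run sketch below echoes the commands rather than executing them (`run` only prints; dropping it would require root and the actual cvl_0_* interfaces, which are assumptions tied to this test rig):

```shell
# Dry-run sketch of the namespace topology built in the trace above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }     # print-only; remove to execute for real (root)

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                         # target NIC into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator
```

This is why every nvmf_tgt invocation later in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`: the target must listen from inside the namespace.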
nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2610147 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2610147 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2610147 ']' 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.372 14:27:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.372 [2024-07-12 14:27:06.216145] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:22:14.372 [2024-07-12 14:27:06.216189] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.372 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.372 [2024-07-12 14:27:06.273859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.372 [2024-07-12 14:27:06.352692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.372 [2024-07-12 14:27:06.352728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.372 [2024-07-12 14:27:06.352735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.372 [2024-07-12 14:27:06.352741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.372 [2024-07-12 14:27:06.352747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.372 [2024-07-12 14:27:06.352785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.372 [2024-07-12 14:27:06.352876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.372 [2024-07-12 14:27:06.352982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.372 [2024-07-12 14:27:06.352983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.315 [2024-07-12 14:27:07.064204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:15.315 
14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.315 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.315 Malloc1 00:22:15.315 [2024-07-12 14:27:07.160219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.315 Malloc2 00:22:15.315 Malloc3 00:22:15.315 Malloc4 00:22:15.315 Malloc5 00:22:15.574 Malloc6 00:22:15.574 Malloc7 00:22:15.574 Malloc8 00:22:15.574 Malloc9 00:22:15.574 Malloc10 00:22:15.574 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.574 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:15.574 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.574 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2610428 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
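The create_subsystems phase above (shutdown.sh lines 26-35) loops over `num_subsystems` and `cat`s one RPC snippet per subsystem into rpcs.txt, then replays the whole file in a single `rpc_cmd` batch, which is what produces the Malloc1..Malloc10 bdevs in the log. A sketch of that batching pattern follows; the specific RPC arguments (malloc size, serial numbers, listener address) are assumptions modeled on the log output, not copied from the script:

```shell
# Sketch of the per-subsystem RPC batching seen in the trace above:
# append RPC lines per subsystem, then issue them as one batch.
rpcs=$(mktemp)
num_subsystems=(1 2 3)        # the real test uses {1..10}
for i in "${num_subsystems[@]}"; do
    cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
lines=$(wc -l <"$rpcs")
echo "queued $lines RPCs"     # then replayed in one shot: rpc_cmd < "$rpcs"
rm -f "$rpcs"
```

Batching all subsystem creation into one rpc_cmd invocation avoids paying the RPC client startup cost forty times.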
2610428 /var/tmp/bdevperf.sock 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2610428 ']' 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.833 { 00:22:15.833 "params": { 00:22:15.833 "name": "Nvme$subsystem", 00:22:15.833 "trtype": "$TEST_TRANSPORT", 00:22:15.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.833 "adrfam": "ipv4", 00:22:15.833 "trsvcid": "$NVMF_PORT", 00:22:15.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.833 "hdgst": ${hdgst:-false}, 00:22:15.833 "ddgst": ${ddgst:-false} 00:22:15.833 }, 00:22:15.833 "method": "bdev_nvme_attach_controller" 00:22:15.833 } 00:22:15.833 EOF 00:22:15.833 )") 00:22:15.833 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat [2024-07-12 14:27:07.635123] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... [2024-07-12 14:27:07.635168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610428 ] 00:22:15.834 EAL: No free 2048 kB hugepages reported on node 1 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:22:15.834 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:15.834 14:27:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.834 "params": { 00:22:15.834 "name": "Nvme1", 00:22:15.834 "trtype": "tcp", 00:22:15.834 "traddr": "10.0.0.2", 00:22:15.834 "adrfam": "ipv4", 00:22:15.834 "trsvcid": "4420", 00:22:15.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.834 "hdgst": false, 00:22:15.834 "ddgst": false 00:22:15.834 }, 00:22:15.834 "method": "bdev_nvme_attach_controller" 00:22:15.834 },{ 00:22:15.834 "params": { 00:22:15.834 "name": "Nvme2", 00:22:15.834 "trtype": "tcp", 00:22:15.834 "traddr": "10.0.0.2", 00:22:15.834 "adrfam": "ipv4", 00:22:15.834 "trsvcid": "4420", 00:22:15.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.834 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.834 "hdgst": false, 00:22:15.834 "ddgst": false 00:22:15.834 }, 00:22:15.834 "method": "bdev_nvme_attach_controller" 00:22:15.834 },{ 00:22:15.834 "params": { 00:22:15.834 "name": "Nvme3", 00:22:15.834 "trtype": "tcp", 00:22:15.834 "traddr": "10.0.0.2", 00:22:15.834 "adrfam": "ipv4", 00:22:15.834 "trsvcid": "4420", 00:22:15.834 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.834 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.834 "hdgst": false, 00:22:15.834 "ddgst": false 00:22:15.834 }, 00:22:15.834 "method": "bdev_nvme_attach_controller" 00:22:15.834 },{ 00:22:15.834 "params": { 00:22:15.834 "name": "Nvme4", 00:22:15.834 "trtype": "tcp", 00:22:15.834 "traddr": "10.0.0.2", 00:22:15.834 "adrfam": "ipv4", 00:22:15.834 "trsvcid": "4420", 00:22:15.834 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.834 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.834 "hdgst": false, 00:22:15.834 "ddgst": false 00:22:15.834 }, 00:22:15.834 "method": "bdev_nvme_attach_controller" 00:22:15.834 },{ 00:22:15.834 "params": { 00:22:15.834 "name": "Nvme5", 00:22:15.834 
"trtype": "tcp", 00:22:15.834 "traddr": "10.0.0.2", 00:22:15.834 "adrfam": "ipv4", 00:22:15.834 "trsvcid": "4420", 00:22:15.834 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.834 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.834 "hdgst": false, 00:22:15.834 "ddgst": false 00:22:15.834 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 },{ 00:22:15.835 "params": { 00:22:15.835 "name": "Nvme6", 00:22:15.835 "trtype": "tcp", 00:22:15.835 "traddr": "10.0.0.2", 00:22:15.835 "adrfam": "ipv4", 00:22:15.835 "trsvcid": "4420", 00:22:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.835 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.835 "hdgst": false, 00:22:15.835 "ddgst": false 00:22:15.835 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 },{ 00:22:15.835 "params": { 00:22:15.835 "name": "Nvme7", 00:22:15.835 "trtype": "tcp", 00:22:15.835 "traddr": "10.0.0.2", 00:22:15.835 "adrfam": "ipv4", 00:22:15.835 "trsvcid": "4420", 00:22:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.835 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.835 "hdgst": false, 00:22:15.835 "ddgst": false 00:22:15.835 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 },{ 00:22:15.835 "params": { 00:22:15.835 "name": "Nvme8", 00:22:15.835 "trtype": "tcp", 00:22:15.835 "traddr": "10.0.0.2", 00:22:15.835 "adrfam": "ipv4", 00:22:15.835 "trsvcid": "4420", 00:22:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.835 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.835 "hdgst": false, 00:22:15.835 "ddgst": false 00:22:15.835 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 },{ 00:22:15.835 "params": { 00:22:15.835 "name": "Nvme9", 00:22:15.835 "trtype": "tcp", 00:22:15.835 "traddr": "10.0.0.2", 00:22:15.835 "adrfam": "ipv4", 00:22:15.835 "trsvcid": "4420", 00:22:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.835 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:15.835 "hdgst": false, 00:22:15.835 "ddgst": 
false 00:22:15.835 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 },{ 00:22:15.835 "params": { 00:22:15.835 "name": "Nvme10", 00:22:15.835 "trtype": "tcp", 00:22:15.835 "traddr": "10.0.0.2", 00:22:15.835 "adrfam": "ipv4", 00:22:15.835 "trsvcid": "4420", 00:22:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.835 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.835 "hdgst": false, 00:22:15.835 "ddgst": false 00:22:15.835 }, 00:22:15.835 "method": "bdev_nvme_attach_controller" 00:22:15.835 }' 00:22:15.835 [2024-07-12 14:27:07.690099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.835 [2024-07-12 14:27:07.763831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.736 Running I/O for 10 seconds... 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:18.304 14:27:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.304 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2610147 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2610147 ']' 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2610147 
00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.305 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2610147 00:22:18.578 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:18.578 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:18.578 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2610147' killing process with pid 2610147 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2610147 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2610147 00:22:18.578 [2024-07-12 14:27:10.318161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50430 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.322614] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.579 [2024-07-12 14:27:10.323816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323865] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.323987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.323995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 
14:27:10.324051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324138] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 
[2024-07-12 14:27:10.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.579 [2024-07-12 14:27:10.324407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.579 [2024-07-12 14:27:10.324420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.579 [2024-07-12 14:27:10.324430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324437] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 
[2024-07-12 14:27:10.324491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324640] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 
00:22:18.580 [2024-07-12 14:27:10.324694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.580 [2024-07-12 14:27:10.324770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.580 [2024-07-12 14:27:10.324772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.580 [2024-07-12 14:27:10.324778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e508d0 is same with the state(5) to be set 00:22:18.581 [2024-07-12 14:27:10.324835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.581 [2024-07-12 14:27:10.324923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.581 [2024-07-12 14:27:10.324932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.581 [2024-07-12 14:27:10.324939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.581 [2024-07-12 14:27:10.324948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.581 [2024-07-12 14:27:10.324954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.581 [2024-07-12 14:27:10.324962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70ef0 is same with the state(5) to be set
00:22:18.581 [2024-07-12 14:27:10.325350] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b70ef0 was disconnected and freed. reset controller.
00:22:18.581 [2024-07-12 14:27:10.326056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e50d70 is same with the state(5) to be set
00:22:18.581 [2024-07-12 14:27:10.326599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51230 is same with the state(5) to be set
[identical tqpair=0x1e51230 messages repeated through 14:27:10.327034; duplicate lines omitted]
00:22:18.582 [2024-07-12 14:27:10.327647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e516f0 is same with the state(5) to be set
[identical tqpair=0x1e516f0 messages repeated through 14:27:10.328053; duplicate lines omitted]
00:22:18.583 [2024-07-12 14:27:10.329486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52030 is same with the state(5) to be set
[identical tqpair=0x1e52030 messages repeated through 14:27:10.329899; duplicate lines omitted]
00:22:18.583 [2024-07-12 14:27:10.330995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set
[identical tqpair=0x1e52970 messages repeated through 14:27:10.331367; duplicate lines omitted]
00:22:18.584 [2024-07-12 14:27:10.331373]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.331419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e52970 is same with the state(5) to be set 00:22:18.584 [2024-07-12 14:27:10.341839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.584 [2024-07-12 14:27:10.341870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.584 [2024-07-12 14:27:10.341884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.584 [2024-07-12 14:27:10.341892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.584 [2024-07-12 14:27:10.341901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:18.584 [2024-07-12 14:27:10.341908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE commands sqid:1 cid:33-63 (lba 28800-32640, len:128, SGL TRANSPORT DATA BLOCK) each reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (timestamps 2024-07-12 14:27:10.341920 through 14:27:10.342422) ...]
00:22:18.585 [2024-07-12 14:27:10.342430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.585 [2024-07-12 14:27:10.342437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.585 [2024-07-12 14:27:10.342446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.585 [2024-07-12 14:27:10.342453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.585 [2024-07-12 14:27:10.342461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2
nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.585 [2024-07-12 14:27:10.342469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands sqid:1 cid:3-23 (lba 24960-27520, len:128, SGL TRANSPORT DATA BLOCK) each reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (timestamps 2024-07-12 14:27:10.342477 through 14:27:10.342802) ...]
00:22:18.585 [2024-07-12 14:27:10.342810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.586 [2024-07-12 14:27:10.342817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:18.586 [... READ commands sqid:1 cid:25-29 (lba 27776-28288, len:128, SGL TRANSPORT DATA BLOCK) each reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (timestamps 2024-07-12 14:27:10.342825 through 14:27:10.342895) ...] 00:22:18.586 [2024-07-12 14:27:10.342926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:18.586 [2024-07-12 14:27:10.342981]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b8b2b0 was disconnected and freed. reset controller. 00:22:18.586 [2024-07-12 14:27:10.343984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:18.586 [2024-07-12 14:27:10.344037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb8b0 (9): Bad file descriptor 00:22:18.586 [2024-07-12 14:27:10.344067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c120d0 is same with the state(5) to be set 00:22:18.586 
[... identical ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) admin abort sequences, with matching nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state recv-state errors, repeated for tqpair=0x1a831d0, tqpair=0x1595340, and tqpair=0x1a8ab30 (timestamps 2024-07-12 14:27:10.344157 through 14:27:10.344461) ...]
00:22:18.586 [2024-07-12 14:27:10.344469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c128d0 is same with the state(5) to be set 00:22:18.586 [2024-07-12 14:27:10.344508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1b050 is same with the state(5) to be set 00:22:18.586 [2024-07-12 14:27:10.344593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46c70 is same with the state(5) to be set 00:22:18.586 [2024-07-12 14:27:10.344674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.586 [2024-07-12 14:27:10.344704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.586 [2024-07-12 14:27:10.344712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.587 [2024-07-12 14:27:10.344726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dbf0 is same with the state(5) to be set 00:22:18.587 [2024-07-12 14:27:10.344753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.587 [2024-07-12 14:27:10.344762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.587 [2024-07-12 14:27:10.344776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.587 [2024-07-12 14:27:10.344792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344799] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.587 [2024-07-12 14:27:10.344806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.587 [2024-07-12 14:27:10.344814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69190 is same with the state(5) to be set 00:22:18.587 [2024-07-12 14:27:10.344926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.344939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.344951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.344959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.344968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.344975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.344984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.344991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.588 [2024-07-12 14:27:10.345099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.588 [2024-07-12 14:27:10.345372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.588 [2024-07-12 14:27:10.345567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.588 [2024-07-12 14:27:10.345575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 
14:27:10.345742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.345960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.345969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346033] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b0f920 was disconnected and freed. reset controller. 
00:22:18.589 [2024-07-12 14:27:10.346167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346265] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.589 [2024-07-12 14:27:10.346451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:18.589 [2024-07-12 14:27:10.346460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 
[2024-07-12 14:27:10.346547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 
[2024-07-12 14:27:10.346905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.346991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.346997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.590 [2024-07-12 14:27:10.347142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.590 [2024-07-12 14:27:10.347151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.591 [2024-07-12 14:27:10.347157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.591 [2024-07-12 14:27:10.347166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.591 [2024-07-12 14:27:10.347173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:18.591 [2024-07-12 14:27:10.347181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.591 [2024-07-12 14:27:10.347193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.591 [2024-07-12 14:27:10.347201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.591 [2024-07-12 14:27:10.347209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.591 [2024-07-12 14:27:10.347272] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b88910 was disconnected and freed. reset controller. 00:22:18.591 [2024-07-12 14:27:10.350634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:18.591 [2024-07-12 14:27:10.350663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:18.591 [2024-07-12 14:27:10.350673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:18.591 [2024-07-12 14:27:10.350686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ab30 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.350697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b050 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.350706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dbf0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.350899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.350912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x1bfb8b0 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.350920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb8b0 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.351443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb8b0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.351502] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.591 [2024-07-12 14:27:10.352131] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.591 [2024-07-12 14:27:10.352191] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.591 [2024-07-12 14:27:10.352319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.352332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8dbf0 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.352343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dbf0 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.352424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.352436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b050 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.352442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1b050 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.352528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.352538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8ab30 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.352546] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ab30 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.352553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.352559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.352568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:18.591 [2024-07-12 14:27:10.352643] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.591 [2024-07-12 14:27:10.352690] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.591 [2024-07-12 14:27:10.352735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.352748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dbf0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.352759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b050 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.352768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ab30 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.352832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.352841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.352849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:18.591 [2024-07-12 14:27:10.352862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.352870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.352877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:18.591 [2024-07-12 14:27:10.352888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.352895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.352902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:18.591 [2024-07-12 14:27:10.352936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.352944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.352950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.591 [2024-07-12 14:27:10.354005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c120d0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.354027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a831d0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.354042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1595340 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.354061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c128d0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.354078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a46c70 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.354094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a69190 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.358424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:18.591 [2024-07-12 14:27:10.358703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.358717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfb8b0 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.358725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb8b0 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.358756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb8b0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.358787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.358796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.358804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:18.591 [2024-07-12 14:27:10.358836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.361535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:18.591 [2024-07-12 14:27:10.361550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:18.591 [2024-07-12 14:27:10.361558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:18.591 [2024-07-12 14:27:10.361776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.361791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8ab30 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.361799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ab30 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.361977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.361989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b050 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.361998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1b050 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.362141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.591 [2024-07-12 14:27:10.362153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8dbf0 with addr=10.0.0.2, port=4420 00:22:18.591 [2024-07-12 14:27:10.362161] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dbf0 is same with the state(5) to be set 00:22:18.591 [2024-07-12 14:27:10.362193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ab30 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.362204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b050 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.362214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dbf0 (9): Bad file descriptor 00:22:18.591 [2024-07-12 14:27:10.362243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.362252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.362262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:18.591 [2024-07-12 14:27:10.362274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.362281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.362288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:18.591 [2024-07-12 14:27:10.362298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:18.591 [2024-07-12 14:27:10.362304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:18.591 [2024-07-12 14:27:10.362311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:18.591 [2024-07-12 14:27:10.362342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.362350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.362357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.591 [2024-07-12 14:27:10.364139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 
[2024-07-12 14:27:10.364234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.592 [2024-07-12 14:27:10.364725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.592 [2024-07-12 14:27:10.364732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 
14:27:10.364886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.364990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.364999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.593 [2024-07-12 14:27:10.365164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.365201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.365209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87ea0 is same with the state(5) to be set 00:22:18.593 [2024-07-12 14:27:10.366241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.593 [2024-07-12 14:27:10.366456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.593 [2024-07-12 14:27:10.366464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.594 [2024-07-12 14:27:10.366558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.594 [2024-07-12 14:27:10.366566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.594 [2024-07-12 14:27:10.366578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the READ command / ABORTED - SQ DELETION completion pair above repeats for cid:20 through cid:63, with lba advancing by 128 blocks per command from 18944 to 24448, len:128 throughout]
00:22:18.595 [2024-07-12 14:27:10.367294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0e490 is same with the state(5) to be set
00:22:18.595 [2024-07-12 14:27:10.368303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.595 [2024-07-12 14:27:10.368318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same pair repeats for cid:7 through cid:63, lba 17280 to 24448]
00:22:18.596 [2024-07-12 14:27:10.369270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40b70 is same with the state(5) to be set
00:22:18.596 [2024-07-12 14:27:10.370242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.596 [2024-07-12 14:27:10.370255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same pair repeats for cid:1 through cid:14, lba 16512 to 18176; the log is truncated mid-entry below]
[2024-07-12 14:27:10.370488] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 
14:27:10.370676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.597 [2024-07-12 14:27:10.370947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.597 [2024-07-12 14:27:10.370962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.597 [2024-07-12 14:27:10.370970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.370977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.370985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.370993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.371278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.371286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a42040 is same with the state(5) to be set 00:22:18.598 [2024-07-12 14:27:10.372289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372604] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.598 [2024-07-12 14:27:10.372627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.598 [2024-07-12 14:27:10.372640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372697] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 
14:27:10.372885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.372989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.372998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.599 [2024-07-12 14:27:10.373158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373248] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.599 [2024-07-12 14:27:10.373328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.599 [2024-07-12 14:27:10.373335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.373344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.373351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.373359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b89de0 is same with the state(5) to be set 00:22:18.600 [2024-07-12 14:27:10.374371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 
14:27:10.374829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.600 [2024-07-12 14:27:10.374939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.600 [2024-07-12 14:27:10.374946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.374955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.374963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.374971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.374978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.374987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.374994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.601 [2024-07-12 14:27:10.375102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.601 [2024-07-12 14:27:10.375357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.601 [2024-07-12 14:27:10.375365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:22:18.601 [2024-07-12 14:27:10.375373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.601 [2024-07-12 14:27:10.375384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.601 [2024-07-12 14:27:10.375393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.601 [2024-07-12 14:27:10.375401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.601 [2024-07-12 14:27:10.375410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.601 [2024-07-12 14:27:10.375418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.601 [2024-07-12 14:27:10.375426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6fa60 is same with the state(5) to be set
00:22:18.601 [2024-07-12 14:27:10.376673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:18.601 [2024-07-12 14:27:10.376692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:18.601 [2024-07-12 14:27:10.376702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:18.601 [2024-07-12 14:27:10.376711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:18.601 [2024-07-12 14:27:10.376781] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:18.601 [2024-07-12 14:27:10.376799] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:18.601 [2024-07-12 14:27:10.376864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:18.601 task offset: 24448 on job bdev=Nvme10n1 fails
00:22:18.601
00:22:18.601                                                                  Latency(us)
00:22:18.601 Device Information                     : runtime(s)    IOPS     MiB/s    Fail/s    TO/s     Average        min        max
00:22:18.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.601 Job: Nvme1n1 ended in about 0.76 seconds with error
00:22:18.601 Verification LBA range: start 0x0 length 0x400
00:22:18.601 Nvme1n1                                :       0.76  167.39    10.46     83.70    0.00   251960.99   31001.38  233422.14
00:22:18.601 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.601 Job: Nvme2n1 ended in about 0.77 seconds with error
00:22:18.601 Verification LBA range: start 0x0 length 0x400
00:22:18.601 Nvme2n1                                :       0.77  166.94    10.43     83.47    0.00   247329.24   17210.32  224304.08
00:22:18.601 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.601 Job: Nvme3n1 ended in about 0.75 seconds with error
00:22:18.601 Verification LBA range: start 0x0 length 0x400
00:22:18.601 Nvme3n1                                :       0.75  256.73    16.05     85.58    0.00   176778.02   15158.76  192390.90
00:22:18.601 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.601 Job: Nvme4n1 ended in about 0.77 seconds with error
00:22:18.601 Verification LBA range: start 0x0 length 0x400
00:22:18.601 Nvme4n1                                :       0.77  174.33    10.90     75.45    0.00   236876.13   17096.35  214274.23
00:22:18.601 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.601 Job: Nvme5n1 ended in about 0.77 seconds with error
00:22:18.601 Verification LBA range: start 0x0 length 0x400
00:22:18.601 Nvme5n1                                :       0.77  166.08    10.38     83.04    0.00   232854.93   18692.01  229774.91
00:22:18.601 Job: Nvme6n1 (Core Mask 0x1, workload: verify,
depth: 64, IO size: 65536) 00:22:18.601 Job: Nvme6n1 ended in about 0.75 seconds with error 00:22:18.601 Verification LBA range: start 0x0 length 0x400 00:22:18.601 Nvme6n1 : 0.75 256.40 16.02 85.47 0.00 165177.32 8035.28 208803.39 00:22:18.601 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.601 Job: Nvme7n1 ended in about 0.77 seconds with error 00:22:18.601 Verification LBA range: start 0x0 length 0x400 00:22:18.601 Nvme7n1 : 0.77 165.64 10.35 82.82 0.00 222986.24 14588.88 213362.42 00:22:18.601 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.601 Job: Nvme8n1 ended in about 0.75 seconds with error 00:22:18.601 Verification LBA range: start 0x0 length 0x400 00:22:18.601 Nvme8n1 : 0.75 257.10 16.07 85.70 0.00 156770.78 5784.26 208803.39 00:22:18.601 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.601 Job: Nvme9n1 ended in about 0.77 seconds with error 00:22:18.601 Verification LBA range: start 0x0 length 0x400 00:22:18.602 Nvme9n1 : 0.77 165.20 10.32 82.60 0.00 213185.82 16868.40 206979.78 00:22:18.602 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.602 Job: Nvme10n1 ended in about 0.74 seconds with error 00:22:18.602 Verification LBA range: start 0x0 length 0x400 00:22:18.602 Nvme10n1 : 0.74 172.38 10.77 86.19 0.00 197117.92 18236.10 242540.19 00:22:18.602 =================================================================================================================== 00:22:18.602 Total : 1948.18 121.76 834.01 0.00 206116.31 5784.26 242540.19 00:22:18.602 [2024-07-12 14:27:10.398057] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:18.602 [2024-07-12 14:27:10.398091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:18.602 [2024-07-12 14:27:10.398355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 
14:27:10.398374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a46c70 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.398390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46c70 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.398556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.398569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c128d0 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.398577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c128d0 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.398728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.398740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a831d0 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.398748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a831d0 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.398896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.398908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1595340 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.398915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1595340 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.400300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:18.602 [2024-07-12 14:27:10.400317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:18.602 [2024-07-12 14:27:10.400327] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:18.602 [2024-07-12 14:27:10.400336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:18.602 [2024-07-12 14:27:10.400631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.400648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a69190 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.400657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69190 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.400761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.400773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c120d0 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.400782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c120d0 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.400794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a46c70 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.400805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c128d0 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.400815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a831d0 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.400824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1595340 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.400856] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:18.602 [2024-07-12 14:27:10.400868] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:18.602 [2024-07-12 14:27:10.400877] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:18.602 [2024-07-12 14:27:10.400888] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:18.602 [2024-07-12 14:27:10.401326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.401345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfb8b0 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.401353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfb8b0 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.401530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.401542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8dbf0 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.401550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8dbf0 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.401753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.401765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1b050 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.401773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1b050 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.401850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.602 [2024-07-12 14:27:10.401862] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a8ab30 with addr=10.0.0.2, port=4420 00:22:18.602 [2024-07-12 14:27:10.401869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ab30 is same with the state(5) to be set 00:22:18.602 [2024-07-12 14:27:10.401882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a69190 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.401892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c120d0 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.401901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.401908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.401917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.602 [2024-07-12 14:27:10.401930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.401937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.401943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:18.602 [2024-07-12 14:27:10.401956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.401963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.401969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:18.602 [2024-07-12 14:27:10.401980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.401987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.401994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfb8b0 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.402101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dbf0 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.402111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1b050 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.402120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8ab30 (9): Bad file descriptor 00:22:18.602 [2024-07-12 14:27:10.402127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402144] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:18.602 [2024-07-12 14:27:10.402250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:18.602 [2024-07-12 14:27:10.402278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:18.602 [2024-07-12 14:27:10.402285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:18.602 [2024-07-12 14:27:10.402310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.602 [2024-07-12 14:27:10.402333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.862 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:18.862 14:27:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2610428 00:22:19.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2610428) - No such process 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.797 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.797 rmmod nvme_tcp 00:22:19.797 rmmod nvme_fabrics 00:22:19.797 rmmod nvme_keyring 00:22:19.797 14:27:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.055 14:27:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.955 00:22:21.955 real 0m8.013s 00:22:21.955 user 0m20.493s 00:22:21.955 sys 0m1.241s 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.955 ************************************ 00:22:21.955 END TEST nvmf_shutdown_tc3 00:22:21.955 ************************************ 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:21.955 00:22:21.955 real 0m30.972s 00:22:21.955 user 1m20.168s 00:22:21.955 sys 0m7.876s 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.955 14:27:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:21.955 ************************************ 00:22:21.955 END TEST nvmf_shutdown 00:22:21.955 ************************************ 00:22:21.955 14:27:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:21.955 14:27:13 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:21.955 14:27:13 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:21.955 14:27:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.213 14:27:13 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:22.213 14:27:13 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.213 14:27:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.213 14:27:13 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:22.213 14:27:13 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:22.214 14:27:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.214 14:27:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.214 14:27:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.214 ************************************ 00:22:22.214 START TEST nvmf_multicontroller 00:22:22.214 ************************************ 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:22.214 * Looking for test storage... 
00:22:22.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.214 
14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.214 14:27:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.214 14:27:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.486 14:27:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.486 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.486 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.487 14:27:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.487 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.487 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:22:27.487 00:22:27.487 --- 10.0.0.2 ping statistics --- 00:22:27.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.487 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:27.487 00:22:27.487 --- 10.0.0.1 ping statistics --- 00:22:27.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.487 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2614858 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2614858 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2614858 ']' 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.487 14:27:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.487 [2024-07-12 14:27:19.464341] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:22:27.487 [2024-07-12 14:27:19.464400] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.487 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.746 [2024-07-12 14:27:19.521511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:27.746 [2024-07-12 14:27:19.601524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.746 [2024-07-12 14:27:19.601558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.746 [2024-07-12 14:27:19.601565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.746 [2024-07-12 14:27:19.601571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.746 [2024-07-12 14:27:19.601576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.746 [2024-07-12 14:27:19.601676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.746 [2024-07-12 14:27:19.601760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.746 [2024-07-12 14:27:19.601762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.314 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.314 [2024-07-12 14:27:20.320884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 Malloc0 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 [2024-07-12 14:27:20.395960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 [2024-07-12 14:27:20.403913] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 Malloc1 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.574 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2615106 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2615106 /var/tmp/bdevperf.sock 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2615106 ']' 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.575 14:27:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.513 NVMe0n1 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.513 1 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.513 request: 00:22:29.513 { 00:22:29.513 "name": "NVMe0", 00:22:29.513 "trtype": "tcp", 00:22:29.513 "traddr": "10.0.0.2", 00:22:29.513 "adrfam": "ipv4", 00:22:29.513 "trsvcid": "4420", 00:22:29.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.513 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:29.513 "hostaddr": "10.0.0.2", 00:22:29.513 "hostsvcid": "60000", 00:22:29.513 "prchk_reftag": false, 00:22:29.513 "prchk_guard": false, 00:22:29.513 "hdgst": false, 00:22:29.513 "ddgst": false, 00:22:29.513 "method": "bdev_nvme_attach_controller", 00:22:29.513 "req_id": 1 00:22:29.513 } 00:22:29.513 Got JSON-RPC error response 00:22:29.513 response: 00:22:29.513 { 00:22:29.513 "code": -114, 00:22:29.513 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:29.513 } 00:22:29.513 
14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.513 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:22:29.513 request: 00:22:29.513 { 00:22:29.513 "name": "NVMe0", 00:22:29.513 "trtype": "tcp", 00:22:29.513 "traddr": "10.0.0.2", 00:22:29.513 "adrfam": "ipv4", 00:22:29.513 "trsvcid": "4420", 00:22:29.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.513 "hostaddr": "10.0.0.2", 00:22:29.513 "hostsvcid": "60000", 00:22:29.513 "prchk_reftag": false, 00:22:29.513 "prchk_guard": false, 00:22:29.513 "hdgst": false, 00:22:29.514 "ddgst": false, 00:22:29.514 "method": "bdev_nvme_attach_controller", 00:22:29.514 "req_id": 1 00:22:29.514 } 00:22:29.514 Got JSON-RPC error response 00:22:29.514 response: 00:22:29.514 { 00:22:29.514 "code": -114, 00:22:29.514 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:29.514 } 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:29.514 request:
00:22:29.514 {
00:22:29.514 "name": "NVMe0",
00:22:29.514 "trtype": "tcp",
00:22:29.514 "traddr": "10.0.0.2",
00:22:29.514 "adrfam": "ipv4",
00:22:29.514 "trsvcid": "4420",
00:22:29.514 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:29.514 "hostaddr": "10.0.0.2",
00:22:29.514 "hostsvcid": "60000",
00:22:29.514 "prchk_reftag": false,
00:22:29.514 "prchk_guard": false,
00:22:29.514 "hdgst": false,
00:22:29.514 "ddgst": false,
00:22:29.514 "multipath": "disable",
00:22:29.514 "method": "bdev_nvme_attach_controller",
00:22:29.514 "req_id": 1
00:22:29.514 }
00:22:29.514 Got JSON-RPC error response
00:22:29.514 response:
00:22:29.514 {
00:22:29.514 "code": -114,
00:22:29.514 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:22:29.514 }
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.514 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:29.773 request:
00:22:29.773 {
00:22:29.773 "name": "NVMe0",
00:22:29.773 "trtype": "tcp",
00:22:29.773 "traddr": "10.0.0.2",
00:22:29.773 "adrfam": "ipv4",
00:22:29.773 "trsvcid": "4420",
00:22:29.773 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:29.773 "hostaddr": "10.0.0.2",
00:22:29.773 "hostsvcid": "60000",
00:22:29.773 "prchk_reftag": false,
00:22:29.773 "prchk_guard": false,
00:22:29.773 "hdgst": false,
00:22:29.773 "ddgst": false,
00:22:29.773 "multipath": "failover",
00:22:29.773 "method": "bdev_nvme_attach_controller",
00:22:29.773 "req_id": 1
00:22:29.773 }
00:22:29.773 Got JSON-RPC error response
00:22:29.773 response:
00:22:29.773 {
00:22:29.773 "code": -114,
00:22:29.773 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:22:29.773 }
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:29.773
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:29.773 14:27:21
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:29.773 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:30.032
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:22:30.032 14:27:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:30.970 0
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2615106
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2615106 ']'
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2615106
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:30.970 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2615106
00:22:31.229 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:31.229 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:31.229 14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2615106'
killing process with pid 2615106
14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2615106
14:27:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2615106
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat
00:22:31.229 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:31.229 [2024-07-12 14:27:20.501341] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:22:31.229 [2024-07-12 14:27:20.501395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615106 ]
00:22:31.229 EAL: No free 2048 kB hugepages reported on node 1
00:22:31.229 [2024-07-12 14:27:20.556102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:31.229 [2024-07-12 14:27:20.629369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:31.229 [2024-07-12 14:27:21.801157] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 68f491ed-f296-4620-9aad-53d092ba3d2d already exists
00:22:31.229 [2024-07-12 14:27:21.801188] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:68f491ed-f296-4620-9aad-53d092ba3d2d alias for bdev NVMe1n1
00:22:31.229 [2024-07-12 14:27:21.801196] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:31.229 Running I/O for 1 seconds...
00:22:31.229
00:22:31.229 Latency(us)
00:22:31.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.229 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:31.229 NVMe0n1 : 1.00 24552.99 95.91 0.00 0.00 5201.67 3291.05 11340.58
00:22:31.229 ===================================================================================================================
00:22:31.229 Total : 24552.99 95.91 0.00 0.00 5201.67 3291.05 11340.58
00:22:31.229 Received shutdown signal, test time was about 1.000000 seconds
00:22:31.229
00:22:31.229 Latency(us)
00:22:31.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.229 ===================================================================================================================
00:22:31.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:31.229 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:31.229 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:31.229 rmmod nvme_tcp
00:22:31.488 rmmod nvme_fabrics
00:22:31.488 rmmod nvme_keyring
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2614858 ']'
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2614858
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2614858 ']'
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2614858
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2614858
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:31.488 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2614858'
killing process with pid 2614858
14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2614858
14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2614858
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:31.747 14:27:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:33.663 14:27:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:33.663
00:22:33.663 real 0m11.603s
00:22:33.663 user 0m16.430s
00:22:33.663 sys 0m4.602s
00:22:33.663 14:27:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:33.663 14:27:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:33.663 ************************************
00:22:33.663 END TEST nvmf_multicontroller
00:22:33.663 ************************************
00:22:33.663 14:27:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:33.663 14:27:25 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:33.663 14:27:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:33.663 14:27:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:33.663 14:27:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:33.975 ************************************
00:22:33.975 START TEST nvmf_aer
00:22:33.975 ************************************
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:33.975 * Looking for test storage...
00:22:33.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable
00:22:33.975 14:27:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs
00:22:39.250 14:27:30
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=()
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:39.250 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:39.251 14:27:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:39.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:39.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms
00:22:39.251
00:22:39.251 --- 10.0.0.2 ping statistics ---
00:22:39.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.251 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:39.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:39.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:22:39.251
00:22:39.251 --- 10.0.0.1 ping statistics ---
00:22:39.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:39.251 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2619110
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2619110
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2619110 ']'
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:39.251 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:39.251 [2024-07-12 14:27:31.107257] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:22:39.251 [2024-07-12 14:27:31.107302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:39.251 EAL: No free 2048 kB hugepages reported on node 1
00:22:39.251 [2024-07-12 14:27:31.170348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:39.251 [2024-07-12 14:27:31.249464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:39.251 [2024-07-12 14:27:31.249500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:39.251 [2024-07-12 14:27:31.249507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:39.251 [2024-07-12 14:27:31.249512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:39.251 [2024-07-12 14:27:31.249517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:39.251 [2024-07-12 14:27:31.249612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.251 [2024-07-12 14:27:31.249708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:39.251 [2024-07-12 14:27:31.249779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:22:39.251 [2024-07-12 14:27:31.249780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 [2024-07-12 14:27:31.951408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 Malloc0
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:40.187 14:27:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:40.187 [2024-07-12 14:27:32.002889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer --
common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.187 [ 00:22:40.187 { 00:22:40.187 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:40.187 "subtype": "Discovery", 00:22:40.187 "listen_addresses": [], 00:22:40.187 "allow_any_host": true, 00:22:40.187 "hosts": [] 00:22:40.187 }, 00:22:40.187 { 00:22:40.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.187 "subtype": "NVMe", 00:22:40.187 "listen_addresses": [ 00:22:40.187 { 00:22:40.187 "trtype": "TCP", 00:22:40.187 "adrfam": "IPv4", 00:22:40.187 "traddr": "10.0.0.2", 00:22:40.187 "trsvcid": "4420" 00:22:40.187 } 00:22:40.187 ], 00:22:40.187 "allow_any_host": true, 00:22:40.187 "hosts": [], 00:22:40.187 "serial_number": "SPDK00000000000001", 00:22:40.187 "model_number": "SPDK bdev Controller", 00:22:40.187 "max_namespaces": 2, 00:22:40.187 "min_cntlid": 1, 00:22:40.187 "max_cntlid": 65519, 00:22:40.187 "namespaces": [ 00:22:40.187 { 00:22:40.187 "nsid": 1, 00:22:40.187 "bdev_name": "Malloc0", 00:22:40.187 "name": "Malloc0", 00:22:40.187 "nguid": "0DA3E0E7502C4A07B9DF0F3AA5C81B61", 00:22:40.187 "uuid": "0da3e0e7-502c-4a07-b9df-0f3aa5c81b61" 00:22:40.187 } 00:22:40.187 ] 00:22:40.187 } 00:22:40.187 ] 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2619149 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:40.187 14:27:32 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:40.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:40.187 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.446 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.446 Malloc1 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 Asynchronous Event Request test 00:22:40.447 Attaching to 10.0.0.2 00:22:40.447 Attached to 10.0.0.2 00:22:40.447 Registering asynchronous event callbacks... 00:22:40.447 Starting namespace attribute notice tests for all controllers... 00:22:40.447 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:40.447 aer_cb - Changed Namespace 00:22:40.447 Cleaning up... 
00:22:40.447 [ 00:22:40.447 { 00:22:40.447 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:40.447 "subtype": "Discovery", 00:22:40.447 "listen_addresses": [], 00:22:40.447 "allow_any_host": true, 00:22:40.447 "hosts": [] 00:22:40.447 }, 00:22:40.447 { 00:22:40.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.447 "subtype": "NVMe", 00:22:40.447 "listen_addresses": [ 00:22:40.447 { 00:22:40.447 "trtype": "TCP", 00:22:40.447 "adrfam": "IPv4", 00:22:40.447 "traddr": "10.0.0.2", 00:22:40.447 "trsvcid": "4420" 00:22:40.447 } 00:22:40.447 ], 00:22:40.447 "allow_any_host": true, 00:22:40.447 "hosts": [], 00:22:40.447 "serial_number": "SPDK00000000000001", 00:22:40.447 "model_number": "SPDK bdev Controller", 00:22:40.447 "max_namespaces": 2, 00:22:40.447 "min_cntlid": 1, 00:22:40.447 "max_cntlid": 65519, 00:22:40.447 "namespaces": [ 00:22:40.447 { 00:22:40.447 "nsid": 1, 00:22:40.447 "bdev_name": "Malloc0", 00:22:40.447 "name": "Malloc0", 00:22:40.447 "nguid": "0DA3E0E7502C4A07B9DF0F3AA5C81B61", 00:22:40.447 "uuid": "0da3e0e7-502c-4a07-b9df-0f3aa5c81b61" 00:22:40.447 }, 00:22:40.447 { 00:22:40.447 "nsid": 2, 00:22:40.447 "bdev_name": "Malloc1", 00:22:40.447 "name": "Malloc1", 00:22:40.447 "nguid": "23FBE908EA4B4650885E34A6FC3C44C2", 00:22:40.447 "uuid": "23fbe908-ea4b-4650-885e-34a6fc3c44c2" 00:22:40.447 } 00:22:40.447 ] 00:22:40.447 } 00:22:40.447 ] 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2619149 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete 
Malloc1 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.447 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.706 rmmod nvme_tcp 00:22:40.706 rmmod nvme_fabrics 00:22:40.706 rmmod nvme_keyring 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2619110 ']' 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2619110 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2619110 ']' 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- 
# kill -0 2619110 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2619110 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2619110' 00:22:40.706 killing process with pid 2619110 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2619110 00:22:40.706 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2619110 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.965 14:27:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.868 14:27:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.868 00:22:42.868 real 0m9.133s 00:22:42.868 user 0m7.539s 00:22:42.868 sys 0m4.369s 00:22:42.868 14:27:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.868 14:27:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:22:42.868 ************************************ 00:22:42.868 END TEST nvmf_aer 00:22:42.868 ************************************ 00:22:42.868 14:27:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.868 14:27:34 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:42.868 14:27:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.868 14:27:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.868 14:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.128 ************************************ 00:22:43.128 START TEST nvmf_async_init 00:22:43.128 ************************************ 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:43.128 * Looking for test storage... 00:22:43.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.128 14:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=efb5b02c92474a9d85bf4539698c1842 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.128 14:27:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.430 
14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.430 14:27:40 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.430 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.431 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.431 14:27:40 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.431 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:48.431 14:27:40 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.431 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.431 
14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:48.431 00:22:48.431 --- 10.0.0.2 ping statistics --- 00:22:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.431 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:48.431 00:22:48.431 --- 10.0.0.1 ping statistics --- 00:22:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.431 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2622660 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2622660 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@829 -- # '[' -z 2622660 ']' 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.431 14:27:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.431 [2024-07-12 14:27:40.429631] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:22:48.431 [2024-07-12 14:27:40.429675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.689 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.689 [2024-07-12 14:27:40.488314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.689 [2024-07-12 14:27:40.568455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.689 [2024-07-12 14:27:40.568489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.689 [2024-07-12 14:27:40.568496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.689 [2024-07-12 14:27:40.568503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.689 [2024-07-12 14:27:40.568508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.689 [2024-07-12 14:27:40.568525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.255 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.255 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:49.255 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.255 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.255 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 [2024-07-12 14:27:41.276202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 null0 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 
14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g efb5b02c92474a9d85bf4539698c1842 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.514 [2024-07-12 14:27:41.316388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.514 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 nvme0n1 00:22:49.774 14:27:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [ 00:22:49.774 { 00:22:49.774 "name": "nvme0n1", 00:22:49.774 "aliases": [ 00:22:49.774 "efb5b02c-9247-4a9d-85bf-4539698c1842" 00:22:49.774 ], 00:22:49.774 "product_name": "NVMe disk", 00:22:49.774 "block_size": 512, 00:22:49.774 "num_blocks": 2097152, 00:22:49.774 "uuid": "efb5b02c-9247-4a9d-85bf-4539698c1842", 00:22:49.774 "assigned_rate_limits": { 00:22:49.774 "rw_ios_per_sec": 0, 00:22:49.774 "rw_mbytes_per_sec": 0, 00:22:49.774 "r_mbytes_per_sec": 0, 00:22:49.774 "w_mbytes_per_sec": 0 00:22:49.774 }, 00:22:49.774 "claimed": false, 00:22:49.774 "zoned": false, 00:22:49.774 "supported_io_types": { 00:22:49.774 "read": true, 00:22:49.774 "write": true, 00:22:49.774 "unmap": false, 00:22:49.774 "flush": true, 00:22:49.774 "reset": true, 00:22:49.774 "nvme_admin": true, 00:22:49.774 "nvme_io": true, 00:22:49.774 "nvme_io_md": false, 00:22:49.774 "write_zeroes": true, 00:22:49.774 "zcopy": false, 00:22:49.774 "get_zone_info": false, 00:22:49.774 "zone_management": false, 00:22:49.774 "zone_append": false, 00:22:49.774 "compare": true, 00:22:49.774 "compare_and_write": true, 00:22:49.774 "abort": true, 00:22:49.774 "seek_hole": false, 00:22:49.774 "seek_data": false, 00:22:49.774 "copy": true, 00:22:49.774 "nvme_iov_md": false 00:22:49.774 }, 00:22:49.774 "memory_domains": [ 00:22:49.774 { 00:22:49.774 "dma_device_id": "system", 00:22:49.774 "dma_device_type": 1 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "driver_specific": { 00:22:49.774 "nvme": [ 00:22:49.774 { 00:22:49.774 "trid": { 00:22:49.774 "trtype": "TCP", 00:22:49.774 "adrfam": "IPv4", 00:22:49.774 "traddr": "10.0.0.2", 
00:22:49.774 "trsvcid": "4420", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:49.774 }, 00:22:49.774 "ctrlr_data": { 00:22:49.774 "cntlid": 1, 00:22:49.774 "vendor_id": "0x8086", 00:22:49.774 "model_number": "SPDK bdev Controller", 00:22:49.774 "serial_number": "00000000000000000000", 00:22:49.774 "firmware_revision": "24.09", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.774 "oacs": { 00:22:49.774 "security": 0, 00:22:49.774 "format": 0, 00:22:49.774 "firmware": 0, 00:22:49.774 "ns_manage": 0 00:22:49.774 }, 00:22:49.774 "multi_ctrlr": true, 00:22:49.774 "ana_reporting": false 00:22:49.774 }, 00:22:49.774 "vs": { 00:22:49.774 "nvme_version": "1.3" 00:22:49.774 }, 00:22:49.774 "ns_data": { 00:22:49.774 "id": 1, 00:22:49.774 "can_share": true 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "mp_policy": "active_passive" 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [2024-07-12 14:27:41.572950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.774 [2024-07-12 14:27:41.573002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef9250 (9): Bad file descriptor 00:22:49.774 [2024-07-12 14:27:41.704462] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [ 00:22:49.774 { 00:22:49.774 "name": "nvme0n1", 00:22:49.774 "aliases": [ 00:22:49.774 "efb5b02c-9247-4a9d-85bf-4539698c1842" 00:22:49.774 ], 00:22:49.774 "product_name": "NVMe disk", 00:22:49.774 "block_size": 512, 00:22:49.774 "num_blocks": 2097152, 00:22:49.774 "uuid": "efb5b02c-9247-4a9d-85bf-4539698c1842", 00:22:49.774 "assigned_rate_limits": { 00:22:49.774 "rw_ios_per_sec": 0, 00:22:49.774 "rw_mbytes_per_sec": 0, 00:22:49.774 "r_mbytes_per_sec": 0, 00:22:49.774 "w_mbytes_per_sec": 0 00:22:49.774 }, 00:22:49.774 "claimed": false, 00:22:49.774 "zoned": false, 00:22:49.774 "supported_io_types": { 00:22:49.774 "read": true, 00:22:49.774 "write": true, 00:22:49.774 "unmap": false, 00:22:49.774 "flush": true, 00:22:49.774 "reset": true, 00:22:49.774 "nvme_admin": true, 00:22:49.774 "nvme_io": true, 00:22:49.774 "nvme_io_md": false, 00:22:49.774 "write_zeroes": true, 00:22:49.774 "zcopy": false, 00:22:49.774 "get_zone_info": false, 00:22:49.774 "zone_management": false, 00:22:49.774 "zone_append": false, 00:22:49.774 "compare": true, 00:22:49.774 "compare_and_write": true, 00:22:49.774 "abort": true, 00:22:49.774 "seek_hole": false, 00:22:49.774 "seek_data": false, 00:22:49.774 "copy": true, 00:22:49.774 "nvme_iov_md": false 00:22:49.774 }, 00:22:49.774 "memory_domains": [ 00:22:49.774 { 00:22:49.774 "dma_device_id": "system", 00:22:49.774 "dma_device_type": 1 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "driver_specific": { 00:22:49.774 "nvme": [ 00:22:49.774 { 00:22:49.774 "trid": { 00:22:49.774 "trtype": "TCP", 00:22:49.774 "adrfam": "IPv4", 00:22:49.774 
"traddr": "10.0.0.2", 00:22:49.774 "trsvcid": "4420", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:49.774 }, 00:22:49.774 "ctrlr_data": { 00:22:49.774 "cntlid": 2, 00:22:49.774 "vendor_id": "0x8086", 00:22:49.774 "model_number": "SPDK bdev Controller", 00:22:49.774 "serial_number": "00000000000000000000", 00:22:49.774 "firmware_revision": "24.09", 00:22:49.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.774 "oacs": { 00:22:49.774 "security": 0, 00:22:49.774 "format": 0, 00:22:49.774 "firmware": 0, 00:22:49.774 "ns_manage": 0 00:22:49.774 }, 00:22:49.774 "multi_ctrlr": true, 00:22:49.774 "ana_reporting": false 00:22:49.774 }, 00:22:49.774 "vs": { 00:22:49.774 "nvme_version": "1.3" 00:22:49.774 }, 00:22:49.774 "ns_data": { 00:22:49.774 "id": 1, 00:22:49.774 "can_share": true 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ], 00:22:49.774 "mp_policy": "active_passive" 00:22:49.774 } 00:22:49.774 } 00:22:49.774 ] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TsPXITZUss 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TsPXITZUss 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [2024-07-12 14:27:41.765517] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:49.774 [2024-07-12 14:27:41.765611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TsPXITZUss 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [2024-07-12 14:27:41.773533] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TsPXITZUss 00:22:49.774 14:27:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.774 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 [2024-07-12 14:27:41.781568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.774 [2024-07-12 14:27:41.781600] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.033 nvme0n1 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:50.033 [ 00:22:50.033 { 00:22:50.033 "name": "nvme0n1", 00:22:50.033 "aliases": [ 00:22:50.033 "efb5b02c-9247-4a9d-85bf-4539698c1842" 00:22:50.033 ], 00:22:50.033 "product_name": "NVMe disk", 00:22:50.033 "block_size": 512, 00:22:50.033 "num_blocks": 2097152, 00:22:50.033 "uuid": "efb5b02c-9247-4a9d-85bf-4539698c1842", 00:22:50.033 "assigned_rate_limits": { 00:22:50.033 "rw_ios_per_sec": 0, 00:22:50.033 "rw_mbytes_per_sec": 0, 00:22:50.033 "r_mbytes_per_sec": 0, 00:22:50.033 "w_mbytes_per_sec": 0 00:22:50.033 }, 00:22:50.033 "claimed": false, 00:22:50.033 "zoned": false, 00:22:50.033 "supported_io_types": { 00:22:50.033 "read": true, 00:22:50.033 "write": true, 00:22:50.033 "unmap": false, 00:22:50.033 "flush": true, 00:22:50.033 "reset": true, 00:22:50.033 "nvme_admin": true, 00:22:50.033 "nvme_io": true, 00:22:50.033 "nvme_io_md": false, 00:22:50.033 "write_zeroes": true, 00:22:50.033 "zcopy": false, 00:22:50.033 "get_zone_info": false, 00:22:50.033 "zone_management": false, 00:22:50.033 "zone_append": false, 00:22:50.033 "compare": true, 00:22:50.033 
"compare_and_write": true, 00:22:50.033 "abort": true, 00:22:50.033 "seek_hole": false, 00:22:50.033 "seek_data": false, 00:22:50.033 "copy": true, 00:22:50.033 "nvme_iov_md": false 00:22:50.033 }, 00:22:50.033 "memory_domains": [ 00:22:50.033 { 00:22:50.033 "dma_device_id": "system", 00:22:50.033 "dma_device_type": 1 00:22:50.033 } 00:22:50.033 ], 00:22:50.033 "driver_specific": { 00:22:50.033 "nvme": [ 00:22:50.033 { 00:22:50.033 "trid": { 00:22:50.033 "trtype": "TCP", 00:22:50.033 "adrfam": "IPv4", 00:22:50.033 "traddr": "10.0.0.2", 00:22:50.033 "trsvcid": "4421", 00:22:50.033 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:50.033 }, 00:22:50.033 "ctrlr_data": { 00:22:50.033 "cntlid": 3, 00:22:50.033 "vendor_id": "0x8086", 00:22:50.033 "model_number": "SPDK bdev Controller", 00:22:50.033 "serial_number": "00000000000000000000", 00:22:50.033 "firmware_revision": "24.09", 00:22:50.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.033 "oacs": { 00:22:50.033 "security": 0, 00:22:50.033 "format": 0, 00:22:50.033 "firmware": 0, 00:22:50.033 "ns_manage": 0 00:22:50.033 }, 00:22:50.033 "multi_ctrlr": true, 00:22:50.033 "ana_reporting": false 00:22:50.033 }, 00:22:50.033 "vs": { 00:22:50.033 "nvme_version": "1.3" 00:22:50.033 }, 00:22:50.033 "ns_data": { 00:22:50.033 "id": 1, 00:22:50.033 "can_share": true 00:22:50.033 } 00:22:50.033 } 00:22:50.033 ], 00:22:50.033 "mp_policy": "active_passive" 00:22:50.033 } 00:22:50.033 } 00:22:50.033 ] 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.033 14:27:41 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TsPXITZUss 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.033 rmmod nvme_tcp 00:22:50.033 rmmod nvme_fabrics 00:22:50.033 rmmod nvme_keyring 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2622660 ']' 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2622660 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2622660 ']' 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2622660 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.033 14:27:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2622660 00:22:50.033 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.033 14:27:42 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.033 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2622660' 00:22:50.033 killing process with pid 2622660 00:22:50.033 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2622660 00:22:50.033 [2024-07-12 14:27:42.008031] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.033 [2024-07-12 14:27:42.008054] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.033 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2622660 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.292 14:27:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.828 14:27:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.828 00:22:52.828 real 0m9.346s 00:22:52.828 user 0m3.501s 00:22:52.828 sys 0m4.393s 00:22:52.828 14:27:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:52.828 14:27:44 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.828 ************************************ 00:22:52.828 END TEST nvmf_async_init 00:22:52.828 ************************************ 00:22:52.828 14:27:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:52.828 14:27:44 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:52.828 14:27:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:52.828 14:27:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.828 14:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.828 ************************************ 00:22:52.828 START TEST dma 00:22:52.828 ************************************ 00:22:52.828 14:27:44 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:52.828 * Looking for test storage... 00:22:52.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.828 14:27:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.828 14:27:44 nvmf_tcp.dma 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.828 14:27:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.828 14:27:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.828 14:27:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.828 14:27:44 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.828 14:27:44 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.828 14:27:44 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.828 14:27:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:52.828 14:27:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.828 14:27:44 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.828 14:27:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.828 14:27:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:52.829 14:27:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:52.829 00:22:52.829 real 0m0.119s 00:22:52.829 user 0m0.055s 00:22:52.829 sys 0m0.072s 00:22:52.829 14:27:44 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:52.829 14:27:44 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:52.829 ************************************ 00:22:52.829 END TEST dma 00:22:52.829 ************************************ 00:22:52.829 14:27:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:52.829 14:27:44 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:52.829 14:27:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:52.829 14:27:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.829 14:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.829 ************************************ 00:22:52.829 START TEST nvmf_identify 00:22:52.829 ************************************ 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:52.829 * Looking for test storage... 
00:22:52.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.829 14:27:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.829 14:27:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.829 14:27:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.101 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.101 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.101 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.102 14:27:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.102 
14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.102 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.102 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.102 14:27:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:22:58.102 00:22:58.102 --- 10.0.0.2 ping statistics --- 00:22:58.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.102 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:58.102 00:22:58.102 --- 10.0.0.1 ping statistics --- 00:22:58.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.102 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2626463 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2626463 00:22:58.102 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2626463 ']' 00:22:58.103 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.103 
14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.103 14:27:49 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:58.103 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.103 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.103 14:27:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.103 [2024-07-12 14:27:49.834397] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:22:58.103 [2024-07-12 14:27:49.834446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.103 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.103 [2024-07-12 14:27:49.892204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.103 [2024-07-12 14:27:49.974387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.103 [2024-07-12 14:27:49.974425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.103 [2024-07-12 14:27:49.974433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.103 [2024-07-12 14:27:49.974439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.103 [2024-07-12 14:27:49.974444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.103 [2024-07-12 14:27:49.974484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.103 [2024-07-12 14:27:49.974505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.103 [2024-07-12 14:27:49.974523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.103 [2024-07-12 14:27:49.974524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.669 [2024-07-12 14:27:50.659557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.669 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.928 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.928 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.928 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 Malloc0 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.929 
14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 [2024-07-12 14:27:50.743300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.929 14:27:50 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.929 [ 00:22:58.929 { 00:22:58.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:58.929 "subtype": "Discovery", 00:22:58.929 "listen_addresses": [ 00:22:58.929 { 00:22:58.929 "trtype": "TCP", 00:22:58.929 "adrfam": "IPv4", 00:22:58.929 "traddr": "10.0.0.2", 00:22:58.929 "trsvcid": "4420" 00:22:58.929 } 00:22:58.929 ], 00:22:58.929 "allow_any_host": true, 00:22:58.929 "hosts": [] 00:22:58.929 }, 00:22:58.929 { 00:22:58.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.929 "subtype": "NVMe", 00:22:58.929 "listen_addresses": [ 00:22:58.929 { 00:22:58.929 "trtype": "TCP", 00:22:58.929 "adrfam": "IPv4", 00:22:58.929 "traddr": "10.0.0.2", 00:22:58.929 "trsvcid": "4420" 00:22:58.929 } 00:22:58.929 ], 00:22:58.929 "allow_any_host": true, 00:22:58.929 "hosts": [], 00:22:58.929 "serial_number": "SPDK00000000000001", 00:22:58.929 "model_number": "SPDK bdev Controller", 00:22:58.929 "max_namespaces": 32, 00:22:58.929 "min_cntlid": 1, 00:22:58.929 "max_cntlid": 65519, 00:22:58.929 "namespaces": [ 00:22:58.929 { 00:22:58.929 "nsid": 1, 00:22:58.929 "bdev_name": "Malloc0", 00:22:58.929 "name": "Malloc0", 00:22:58.929 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:58.929 "eui64": "ABCDEF0123456789", 00:22:58.929 "uuid": "63751ce3-3dff-40b7-8f28-da0184634ab5" 00:22:58.929 } 00:22:58.929 ] 00:22:58.929 } 00:22:58.929 ] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.929 14:27:50 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:58.929 [2024-07-12 14:27:50.796099] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:22:58.929 [2024-07-12 14:27:50.796148] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626712 ] 00:22:58.929 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.929 [2024-07-12 14:27:50.823933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:58.929 [2024-07-12 14:27:50.823982] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:58.929 [2024-07-12 14:27:50.823987] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:58.929 [2024-07-12 14:27:50.823997] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:58.929 [2024-07-12 14:27:50.824003] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:58.929 [2024-07-12 14:27:50.824347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:58.929 [2024-07-12 14:27:50.824385] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6f8ec0 0 00:22:58.929 [2024-07-12 14:27:50.838398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:58.929 [2024-07-12 14:27:50.838417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:58.929 [2024-07-12 14:27:50.838421] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:58.929 [2024-07-12 14:27:50.838424] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:58.929 [2024-07-12 14:27:50.838463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.838469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.838472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.929 [2024-07-12 14:27:50.838485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:58.929 [2024-07-12 14:27:50.838503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.929 [2024-07-12 14:27:50.846388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.929 [2024-07-12 14:27:50.846396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.929 [2024-07-12 14:27:50.846399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.929 [2024-07-12 14:27:50.846414] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:58.929 [2024-07-12 14:27:50.846420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:58.929 [2024-07-12 14:27:50.846425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:58.929 [2024-07-12 14:27:50.846436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.929 [2024-07-12 14:27:50.846450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.929 [2024-07-12 14:27:50.846462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x77be40, cid 0, qid 0 00:22:58.929 [2024-07-12 14:27:50.846625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.929 [2024-07-12 14:27:50.846631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.929 [2024-07-12 14:27:50.846634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.929 [2024-07-12 14:27:50.846642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:58.929 [2024-07-12 14:27:50.846649] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:58.929 [2024-07-12 14:27:50.846657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.929 [2024-07-12 14:27:50.846664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.929 [2024-07-12 14:27:50.846669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.929 [2024-07-12 14:27:50.846679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.929 [2024-07-12 14:27:50.846744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.929 [2024-07-12 14:27:50.846749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.929 [2024-07-12 14:27:50.846752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.846759] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:58.930 [2024-07-12 14:27:50.846766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.846772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.846783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.930 [2024-07-12 14:27:50.846793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.846855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.846860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.846863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.846871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.846878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 
[2024-07-12 14:27:50.846890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.930 [2024-07-12 14:27:50.846899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.846970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.846975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.846978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.846981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.846985] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:58.930 [2024-07-12 14:27:50.846989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.846995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.847102] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:58.930 [2024-07-12 14:27:50.847107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.847113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.847125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.930 [2024-07-12 14:27:50.847134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.847200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.847206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.847209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.847216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:58.930 [2024-07-12 14:27:50.847223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.847235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.930 [2024-07-12 14:27:50.847244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.847311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.847317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.847320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 
14:27:50.847323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.847327] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:58.930 [2024-07-12 14:27:50.847331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.847337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:58.930 [2024-07-12 14:27:50.847348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.847356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.847365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.930 [2024-07-12 14:27:50.847375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.847471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.930 [2024-07-12 14:27:50.847477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.930 [2024-07-12 14:27:50.847480] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847483] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f8ec0): datao=0, datal=4096, cccid=0 00:22:58.930 [2024-07-12 14:27:50.847489] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77be40) on tqpair(0x6f8ec0): expected_datao=0, payload_size=4096 00:22:58.930 [2024-07-12 14:27:50.847493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.847513] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.890393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.890396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.890407] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:58.930 [2024-07-12 14:27:50.890414] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:58.930 [2024-07-12 14:27:50.890418] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:58.930 [2024-07-12 14:27:50.890422] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:58.930 [2024-07-12 14:27:50.890426] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:58.930 [2024-07-12 14:27:50.890430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.890438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.890445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.890458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:58.930 [2024-07-12 14:27:50.890470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.930 [2024-07-12 14:27:50.890631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.930 [2024-07-12 14:27:50.890637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.930 [2024-07-12 14:27:50.890640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:58.930 [2024-07-12 14:27:50.890650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.890662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.930 [2024-07-12 14:27:50.890667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 
14:27:50.890673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.890677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.930 [2024-07-12 14:27:50.890682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.890696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.930 [2024-07-12 14:27:50.890701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 14:27:50.890711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.930 [2024-07-12 14:27:50.890715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.890725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:58.930 [2024-07-12 14:27:50.890731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.930 [2024-07-12 14:27:50.890734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f8ec0) 00:22:58.930 [2024-07-12 
14:27:50.890739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.931 [2024-07-12 14:27:50.890750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77be40, cid 0, qid 0 00:22:58.931 [2024-07-12 14:27:50.890755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77bfc0, cid 1, qid 0 00:22:58.931 [2024-07-12 14:27:50.890759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c140, cid 2, qid 0 00:22:58.931 [2024-07-12 14:27:50.890763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:58.931 [2024-07-12 14:27:50.890767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c440, cid 4, qid 0 00:22:58.931 [2024-07-12 14:27:50.890868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.931 [2024-07-12 14:27:50.890874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.931 [2024-07-12 14:27:50.890877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.890880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c440) on tqpair=0x6f8ec0 00:22:58.931 [2024-07-12 14:27:50.890884] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:58.931 [2024-07-12 14:27:50.890888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:58.931 [2024-07-12 14:27:50.890898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.890901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f8ec0) 00:22:58.931 [2024-07-12 14:27:50.890907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.931 [2024-07-12 14:27:50.890916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c440, cid 4, qid 0 00:22:58.931 [2024-07-12 14:27:50.890991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.931 [2024-07-12 14:27:50.890997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.931 [2024-07-12 14:27:50.891000] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891003] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f8ec0): datao=0, datal=4096, cccid=4 00:22:58.931 [2024-07-12 14:27:50.891007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77c440) on tqpair(0x6f8ec0): expected_datao=0, payload_size=4096 00:22:58.931 [2024-07-12 14:27:50.891010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891030] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891035] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.931 [2024-07-12 14:27:50.891075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.931 [2024-07-12 14:27:50.891077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c440) on tqpair=0x6f8ec0 00:22:58.931 [2024-07-12 14:27:50.891091] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:58.931 [2024-07-12 14:27:50.891111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891115] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f8ec0) 00:22:58.931 [2024-07-12 14:27:50.891121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.931 [2024-07-12 14:27:50.891126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f8ec0) 00:22:58.931 [2024-07-12 14:27:50.891137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.931 [2024-07-12 14:27:50.891150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c440, cid 4, qid 0 00:22:58.931 [2024-07-12 14:27:50.891154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c5c0, cid 5, qid 0 00:22:58.931 [2024-07-12 14:27:50.891253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.931 [2024-07-12 14:27:50.891259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.931 [2024-07-12 14:27:50.891261] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891264] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f8ec0): datao=0, datal=1024, cccid=4 00:22:58.931 [2024-07-12 14:27:50.891268] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77c440) on tqpair(0x6f8ec0): expected_datao=0, payload_size=1024 00:22:58.931 [2024-07-12 14:27:50.891271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891277] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891280] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.931 [2024-07-12 14:27:50.891289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.931 [2024-07-12 14:27:50.891292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.891295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c5c0) on tqpair=0x6f8ec0 00:22:58.931 [2024-07-12 14:27:50.931546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.931 [2024-07-12 14:27:50.931557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.931 [2024-07-12 14:27:50.931560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.931563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c440) on tqpair=0x6f8ec0 00:22:58.931 [2024-07-12 14:27:50.931578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.931582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f8ec0) 00:22:58.931 [2024-07-12 14:27:50.931589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.931 [2024-07-12 14:27:50.931604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c440, cid 4, qid 0 00:22:58.931 [2024-07-12 14:27:50.931711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.931 [2024-07-12 14:27:50.931720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.931 [2024-07-12 14:27:50.931724] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.931727] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x6f8ec0): datao=0, datal=3072, cccid=4 00:22:58.931 [2024-07-12 14:27:50.931730] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77c440) on tqpair(0x6f8ec0): expected_datao=0, payload_size=3072 00:22:58.931 [2024-07-12 14:27:50.931734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.931746] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.931 [2024-07-12 14:27:50.931749] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:59.193 [2024-07-12 14:27:50.972588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.193 [2024-07-12 14:27:50.972598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.193 [2024-07-12 14:27:50.972602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.193 [2024-07-12 14:27:50.972605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c440) on tqpair=0x6f8ec0 00:22:59.193 [2024-07-12 14:27:50.972614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.193 [2024-07-12 14:27:50.972618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f8ec0) 00:22:59.194 [2024-07-12 14:27:50.972625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.194 [2024-07-12 14:27:50.972639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c440, cid 4, qid 0 00:22:59.194 [2024-07-12 14:27:50.972707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:59.194 [2024-07-12 14:27:50.972712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:59.194 [2024-07-12 14:27:50.972715] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:59.194 [2024-07-12 14:27:50.972718] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x6f8ec0): datao=0, datal=8, cccid=4 00:22:59.194 [2024-07-12 14:27:50.972722] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x77c440) on tqpair(0x6f8ec0): expected_datao=0, payload_size=8 00:22:59.194 [2024-07-12 14:27:50.972725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.194 [2024-07-12 14:27:50.972732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:59.194 [2024-07-12 14:27:50.972735] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:59.194 [2024-07-12 14:27:51.013534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.194 [2024-07-12 14:27:51.013547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.194 [2024-07-12 14:27:51.013550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.194 [2024-07-12 14:27:51.013553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c440) on tqpair=0x6f8ec0 00:22:59.194 ===================================================== 00:22:59.194 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:59.194 ===================================================== 00:22:59.194 Controller Capabilities/Features 00:22:59.194 ================================ 00:22:59.194 Vendor ID: 0000 00:22:59.194 Subsystem Vendor ID: 0000 00:22:59.194 Serial Number: .................... 00:22:59.194 Model Number: ........................................ 
00:22:59.194 Firmware Version: 24.09 00:22:59.194 Recommended Arb Burst: 0 00:22:59.194 IEEE OUI Identifier: 00 00 00 00:22:59.194 Multi-path I/O 00:22:59.194 May have multiple subsystem ports: No 00:22:59.194 May have multiple controllers: No 00:22:59.194 Associated with SR-IOV VF: No 00:22:59.194 Max Data Transfer Size: 131072 00:22:59.194 Max Number of Namespaces: 0 00:22:59.194 Max Number of I/O Queues: 1024 00:22:59.194 NVMe Specification Version (VS): 1.3 00:22:59.194 NVMe Specification Version (Identify): 1.3 00:22:59.194 Maximum Queue Entries: 128 00:22:59.194 Contiguous Queues Required: Yes 00:22:59.194 Arbitration Mechanisms Supported 00:22:59.194 Weighted Round Robin: Not Supported 00:22:59.194 Vendor Specific: Not Supported 00:22:59.194 Reset Timeout: 15000 ms 00:22:59.194 Doorbell Stride: 4 bytes 00:22:59.194 NVM Subsystem Reset: Not Supported 00:22:59.194 Command Sets Supported 00:22:59.194 NVM Command Set: Supported 00:22:59.194 Boot Partition: Not Supported 00:22:59.194 Memory Page Size Minimum: 4096 bytes 00:22:59.194 Memory Page Size Maximum: 4096 bytes 00:22:59.194 Persistent Memory Region: Not Supported 00:22:59.194 Optional Asynchronous Events Supported 00:22:59.194 Namespace Attribute Notices: Not Supported 00:22:59.194 Firmware Activation Notices: Not Supported 00:22:59.194 ANA Change Notices: Not Supported 00:22:59.194 PLE Aggregate Log Change Notices: Not Supported 00:22:59.194 LBA Status Info Alert Notices: Not Supported 00:22:59.194 EGE Aggregate Log Change Notices: Not Supported 00:22:59.194 Normal NVM Subsystem Shutdown event: Not Supported 00:22:59.194 Zone Descriptor Change Notices: Not Supported 00:22:59.194 Discovery Log Change Notices: Supported 00:22:59.194 Controller Attributes 00:22:59.194 128-bit Host Identifier: Not Supported 00:22:59.194 Non-Operational Permissive Mode: Not Supported 00:22:59.194 NVM Sets: Not Supported 00:22:59.194 Read Recovery Levels: Not Supported 00:22:59.194 Endurance Groups: Not Supported 00:22:59.194 
00:22:59.194 Predictable Latency Mode: Not Supported
00:22:59.194 Traffic Based Keep Alive: Not Supported
00:22:59.194 Namespace Granularity: Not Supported
00:22:59.194 SQ Associations: Not Supported
00:22:59.194 UUID List: Not Supported
00:22:59.194 Multi-Domain Subsystem: Not Supported
00:22:59.194 Fixed Capacity Management: Not Supported
00:22:59.194 Variable Capacity Management: Not Supported
00:22:59.194 Delete Endurance Group: Not Supported
00:22:59.194 Delete NVM Set: Not Supported
00:22:59.194 Extended LBA Formats Supported: Not Supported
00:22:59.194 Flexible Data Placement Supported: Not Supported
00:22:59.194
00:22:59.194 Controller Memory Buffer Support
00:22:59.194 ================================
00:22:59.194 Supported: No
00:22:59.194
00:22:59.194 Persistent Memory Region Support
00:22:59.194 ================================
00:22:59.194 Supported: No
00:22:59.194
00:22:59.194 Admin Command Set Attributes
00:22:59.194 ============================
00:22:59.194 Security Send/Receive: Not Supported
00:22:59.194 Format NVM: Not Supported
00:22:59.194 Firmware Activate/Download: Not Supported
00:22:59.194 Namespace Management: Not Supported
00:22:59.194 Device Self-Test: Not Supported
00:22:59.194 Directives: Not Supported
00:22:59.194 NVMe-MI: Not Supported
00:22:59.194 Virtualization Management: Not Supported
00:22:59.194 Doorbell Buffer Config: Not Supported
00:22:59.194 Get LBA Status Capability: Not Supported
00:22:59.194 Command & Feature Lockdown Capability: Not Supported
00:22:59.194 Abort Command Limit: 1
00:22:59.194 Async Event Request Limit: 4
00:22:59.194 Number of Firmware Slots: N/A
00:22:59.194 Firmware Slot 1 Read-Only: N/A
00:22:59.194 Firmware Activation Without Reset: N/A
00:22:59.194 Multiple Update Detection Support: N/A
00:22:59.194 Firmware Update Granularity: No Information Provided
00:22:59.194 Per-Namespace SMART Log: No
00:22:59.194 Asymmetric Namespace Access Log Page: Not Supported
00:22:59.194 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:59.194 Command Effects Log Page: Not Supported
00:22:59.194 Get Log Page Extended Data: Supported
00:22:59.194 Telemetry Log Pages: Not Supported
00:22:59.194 Persistent Event Log Pages: Not Supported
00:22:59.194 Supported Log Pages Log Page: May Support
00:22:59.194 Commands Supported & Effects Log Page: Not Supported
00:22:59.194 Feature Identifiers & Effects Log Page: May Support
00:22:59.194 NVMe-MI Commands & Effects Log Page: May Support
00:22:59.194 Data Area 4 for Telemetry Log: Not Supported
00:22:59.194 Error Log Page Entries Supported: 128
00:22:59.194 Keep Alive: Not Supported
00:22:59.194
00:22:59.194 NVM Command Set Attributes
00:22:59.194 ==========================
00:22:59.194 Submission Queue Entry Size
00:22:59.194 Max: 1
00:22:59.194 Min: 1
00:22:59.194 Completion Queue Entry Size
00:22:59.194 Max: 1
00:22:59.194 Min: 1
00:22:59.194 Number of Namespaces: 0
00:22:59.194 Compare Command: Not Supported
00:22:59.194 Write Uncorrectable Command: Not Supported
00:22:59.194 Dataset Management Command: Not Supported
00:22:59.194 Write Zeroes Command: Not Supported
00:22:59.194 Set Features Save Field: Not Supported
00:22:59.194 Reservations: Not Supported
00:22:59.194 Timestamp: Not Supported
00:22:59.194 Copy: Not Supported
00:22:59.194 Volatile Write Cache: Not Present
00:22:59.194 Atomic Write Unit (Normal): 1
00:22:59.194 Atomic Write Unit (PFail): 1
00:22:59.194 Atomic Compare & Write Unit: 1
00:22:59.194 Fused Compare & Write: Supported
00:22:59.194 Scatter-Gather List
00:22:59.194 SGL Command Set: Supported
00:22:59.194 SGL Keyed: Supported
00:22:59.194 SGL Bit Bucket Descriptor: Not Supported
00:22:59.194 SGL Metadata Pointer: Not Supported
00:22:59.194 Oversized SGL: Not Supported
00:22:59.194 SGL Metadata Address: Not Supported
00:22:59.194 SGL Offset: Supported
00:22:59.194 Transport SGL Data Block: Not Supported
00:22:59.194 Replay Protected Memory Block: Not Supported
00:22:59.194
00:22:59.194 Firmware Slot Information
00:22:59.194 =========================
00:22:59.194 Active slot: 0
00:22:59.194
00:22:59.194
00:22:59.194 Error Log
00:22:59.194 =========
00:22:59.194
00:22:59.194 Active Namespaces
00:22:59.194 =================
00:22:59.194 Discovery Log Page
00:22:59.194 ==================
00:22:59.194 Generation Counter: 2
00:22:59.194 Number of Records: 2
00:22:59.194 Record Format: 0
00:22:59.194
00:22:59.194 Discovery Log Entry 0
00:22:59.194 ----------------------
00:22:59.194 Transport Type: 3 (TCP)
00:22:59.194 Address Family: 1 (IPv4)
00:22:59.194 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:59.194 Entry Flags:
00:22:59.194 Duplicate Returned Information: 1
00:22:59.194 Explicit Persistent Connection Support for Discovery: 1
00:22:59.194 Transport Requirements:
00:22:59.194 Secure Channel: Not Required
00:22:59.194 Port ID: 0 (0x0000)
00:22:59.194 Controller ID: 65535 (0xffff)
00:22:59.194 Admin Max SQ Size: 128
00:22:59.194 Transport Service Identifier: 4420
00:22:59.194 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:59.194 Transport Address: 10.0.0.2
00:22:59.194 Discovery Log Entry 1
00:22:59.194 ----------------------
00:22:59.194 Transport Type: 3 (TCP)
00:22:59.194 Address Family: 1 (IPv4)
00:22:59.194 Subsystem Type: 2 (NVM Subsystem)
00:22:59.194 Entry Flags:
00:22:59.194 Duplicate Returned Information: 0
00:22:59.194 Explicit Persistent Connection Support for Discovery: 0
00:22:59.194 Transport Requirements:
00:22:59.194 Secure Channel: Not Required
00:22:59.194 Port ID: 0 (0x0000)
00:22:59.194 Controller ID: 65535 (0xffff)
00:22:59.195 Admin Max SQ Size: 128
00:22:59.195 Transport Service Identifier: 4420
00:22:59.195 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:59.195 Transport Address: 10.0.0.2 [2024-07-12 14:27:51.013633] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:59.195 [2024-07-12 14:27:51.013643]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77be40) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.195 [2024-07-12 14:27:51.013655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77bfc0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.195 [2024-07-12 14:27:51.013663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c140) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.195 [2024-07-12 14:27:51.013671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.195 [2024-07-12 14:27:51.013686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.013700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.013713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.013798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.013804] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.013806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.013828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.013840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.013925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.013931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.013934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.013941] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:59.195 [2024-07-12 14:27:51.013945] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:59.195 [2024-07-12 14:27:51.013952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.013959] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.013964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.013974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 
14:27:51.014174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014387] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 
14:27:51.014624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.195 [2024-07-12 14:27:51.014845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.195 [2024-07-12 14:27:51.014848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.195 [2024-07-12 14:27:51.014859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.195 [2024-07-12 14:27:51.014866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.195 [2024-07-12 14:27:51.014871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.195 [2024-07-12 14:27:51.014880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.195 [2024-07-12 14:27:51.014949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.014954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.014957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.014960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.196 [2024-07-12 14:27:51.014968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.014971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 
[2024-07-12 14:27:51.014974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.196 [2024-07-12 14:27:51.014980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.014989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.196 [2024-07-12 14:27:51.015065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.015070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.015073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.196 [2024-07-12 14:27:51.015087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.196 [2024-07-12 14:27:51.015098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.015108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.196 [2024-07-12 14:27:51.015169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.015175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.015177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 
00:22:59.196 [2024-07-12 14:27:51.015188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.196 [2024-07-12 14:27:51.015200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.015209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.196 [2024-07-12 14:27:51.015272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.015277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.015280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.196 [2024-07-12 14:27:51.015291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.015297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.196 [2024-07-12 14:27:51.015303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.015311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.196 [2024-07-12 14:27:51.019383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.019392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 
[2024-07-12 14:27:51.019395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.019399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.196 [2024-07-12 14:27:51.019409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.019413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.019416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f8ec0) 00:22:59.196 [2024-07-12 14:27:51.019422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.019433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x77c2c0, cid 3, qid 0 00:22:59.196 [2024-07-12 14:27:51.019589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.019595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.019597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.019600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x77c2c0) on tqpair=0x6f8ec0 00:22:59.196 [2024-07-12 14:27:51.019611] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:59.196 00:22:59.196 14:27:51 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:59.196 [2024-07-12 14:27:51.058296] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:22:59.196 [2024-07-12 14:27:51.058344] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626714 ] 00:22:59.196 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.196 [2024-07-12 14:27:51.086612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:59.196 [2024-07-12 14:27:51.086655] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:59.196 [2024-07-12 14:27:51.086660] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:59.196 [2024-07-12 14:27:51.086669] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:59.196 [2024-07-12 14:27:51.086675] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:59.196 [2024-07-12 14:27:51.086980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:59.196 [2024-07-12 14:27:51.087002] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x88dec0 0 00:22:59.196 [2024-07-12 14:27:51.101384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:59.196 [2024-07-12 14:27:51.101395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:59.196 [2024-07-12 14:27:51.101398] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:59.196 [2024-07-12 14:27:51.101401] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:59.196 [2024-07-12 14:27:51.101428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.101433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:59.196 [2024-07-12 14:27:51.101436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0) 00:22:59.196 [2024-07-12 14:27:51.101446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:59.196 [2024-07-12 14:27:51.101460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0 00:22:59.196 [2024-07-12 14:27:51.109388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.109396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.109399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0 00:22:59.196 [2024-07-12 14:27:51.109412] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:59.196 [2024-07-12 14:27:51.109418] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:59.196 [2024-07-12 14:27:51.109423] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:59.196 [2024-07-12 14:27:51.109433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0) 00:22:59.196 [2024-07-12 14:27:51.109448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.109460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0 
00:22:59.196 [2024-07-12 14:27:51.109627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.109633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.109636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0 00:22:59.196 [2024-07-12 14:27:51.109644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:59.196 [2024-07-12 14:27:51.109650] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:59.196 [2024-07-12 14:27:51.109657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0) 00:22:59.196 [2024-07-12 14:27:51.109669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.109679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0 00:22:59.196 [2024-07-12 14:27:51.109746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.109751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.109754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0 00:22:59.196 [2024-07-12 14:27:51.109762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:59.196 [2024-07-12 14:27:51.109769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:59.196 [2024-07-12 14:27:51.109775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.196 [2024-07-12 14:27:51.109781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0) 00:22:59.196 [2024-07-12 14:27:51.109787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.196 [2024-07-12 14:27:51.109796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0 00:22:59.196 [2024-07-12 14:27:51.109860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.196 [2024-07-12 14:27:51.109865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.196 [2024-07-12 14:27:51.109868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.197 [2024-07-12 14:27:51.109872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0 00:22:59.197 [2024-07-12 14:27:51.109876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:59.197 [2024-07-12 14:27:51.109884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.197 [2024-07-12 14:27:51.109887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.197 [2024-07-12 14:27:51.109891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0) 00:22:59.197 [2024-07-12 14:27:51.109896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.197 [2024-07-12 14:27:51.109905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.109998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110016] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:22:59.197 [2024-07-12 14:27:51.110020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:22:59.197 [2024-07-12 14:27:51.110027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:22:59.197 [2024-07-12 14:27:51.110132] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:22:59.197 [2024-07-12 14:27:51.110135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:22:59.197 [2024-07-12 14:27:51.110142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.197 [2024-07-12 14:27:51.110163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.110229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:22:59.197 [2024-07-12 14:27:51.110253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.197 [2024-07-12 14:27:51.110274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.110338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110353] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:22:59.197 [2024-07-12 14:27:51.110358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:22:59.197 [2024-07-12 14:27:51.110370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.197 [2024-07-12 14:27:51.110405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.110507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.197 [2024-07-12 14:27:51.110513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.197 [2024-07-12 14:27:51.110516] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=4096, cccid=0
00:22:59.197 [2024-07-12 14:27:51.110523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x910e40) on tqpair(0x88dec0): expected_datao=0, payload_size=4096
00:22:59.197 [2024-07-12 14:27:51.110527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110533] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110537] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110589] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:22:59.197 [2024-07-12 14:27:51.110595] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:22:59.197 [2024-07-12 14:27:51.110599] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:22:59.197 [2024-07-12 14:27:51.110603] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:22:59.197 [2024-07-12 14:27:51.110607] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:22:59.197 [2024-07-12 14:27:51.110611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:59.197 [2024-07-12 14:27:51.110646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.110713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.197 [2024-07-12 14:27:51.110749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.197 [2024-07-12 14:27:51.110766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.197 [2024-07-12 14:27:51.110782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.197 [2024-07-12 14:27:51.110797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.197 [2024-07-12 14:27:51.110821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.197 [2024-07-12 14:27:51.110831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910e40, cid 0, qid 0
00:22:59.197 [2024-07-12 14:27:51.110835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x910fc0, cid 1, qid 0
00:22:59.197 [2024-07-12 14:27:51.110839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911140, cid 2, qid 0
00:22:59.197 [2024-07-12 14:27:51.110843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0
00:22:59.197 [2024-07-12 14:27:51.110848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.197 [2024-07-12 14:27:51.110950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.197 [2024-07-12 14:27:51.110956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.197 [2024-07-12 14:27:51.110959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.197 [2024-07-12 14:27:51.110962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.197 [2024-07-12 14:27:51.110966] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:22:59.197 [2024-07-12 14:27:51.110970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:22:59.197 [2024-07-12 14:27:51.110977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.110982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.110987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.110991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.110995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.111001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:59.198 [2024-07-12 14:27:51.111010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.198 [2024-07-12 14:27:51.111084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.198 [2024-07-12 14:27:51.111090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.198 [2024-07-12 14:27:51.111093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.111096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.198 [2024-07-12 14:27:51.111145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.111154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.111161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.111164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.111170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.198 [2024-07-12 14:27:51.111179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.198 [2024-07-12 14:27:51.111252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.198 [2024-07-12 14:27:51.111257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.198 [2024-07-12 14:27:51.111261] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.111264] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=4096, cccid=4
00:22:59.198 [2024-07-12 14:27:51.111267] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x911440) on tqpair(0x88dec0): expected_datao=0, payload_size=4096
00:22:59.198 [2024-07-12 14:27:51.111271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.111287] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.111290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.198 [2024-07-12 14:27:51.151513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.198 [2024-07-12 14:27:51.151516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.198 [2024-07-12 14:27:51.151529] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:22:59.198 [2024-07-12 14:27:51.151537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.151545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.151552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.151561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.198 [2024-07-12 14:27:51.151572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.198 [2024-07-12 14:27:51.151661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.198 [2024-07-12 14:27:51.151667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.198 [2024-07-12 14:27:51.151673] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151676] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=4096, cccid=4
00:22:59.198 [2024-07-12 14:27:51.151679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x911440) on tqpair(0x88dec0): expected_datao=0, payload_size=4096
00:22:59.198 [2024-07-12 14:27:51.151683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151689] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151692] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.198 [2024-07-12 14:27:51.151730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.198 [2024-07-12 14:27:51.151733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.198 [2024-07-12 14:27:51.151747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.151755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.151762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.151771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.198 [2024-07-12 14:27:51.151781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.198 [2024-07-12 14:27:51.151855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.198 [2024-07-12 14:27:51.151860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.198 [2024-07-12 14:27:51.151863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151867] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=4096, cccid=4
00:22:59.198 [2024-07-12 14:27:51.151870] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x911440) on tqpair(0x88dec0): expected_datao=0, payload_size=4096
00:22:59.198 [2024-07-12 14:27:51.151874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151889] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.151893] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.196392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.198 [2024-07-12 14:27:51.196407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.198 [2024-07-12 14:27:51.196410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.196414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.198 [2024-07-12 14:27:51.196423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196463] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:22:59.198 [2024-07-12 14:27:51.196467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:22:59.198 [2024-07-12 14:27:51.196472] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:22:59.198 [2024-07-12 14:27:51.196485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.196488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.196496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.198 [2024-07-12 14:27:51.196502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.196505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.198 [2024-07-12 14:27:51.196509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88dec0)
00:22:59.198 [2024-07-12 14:27:51.196514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.198 [2024-07-12 14:27:51.196529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.198 [2024-07-12 14:27:51.196533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9115c0, cid 5, qid 0
00:22:59.198 [2024-07-12 14:27:51.196618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.198 [2024-07-12 14:27:51.196624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.198 [2024-07-12 14:27:51.196627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.196636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.196641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.196644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9115c0) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.196655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.196664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.196674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9115c0, cid 5, qid 0
00:22:59.199 [2024-07-12 14:27:51.196768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.196774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.196776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9115c0) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.196788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.196797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.196806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9115c0, cid 5, qid 0
00:22:59.199 [2024-07-12 14:27:51.196872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.196879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.196882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9115c0) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.196893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.196902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.196910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9115c0, cid 5, qid 0
00:22:59.199 [2024-07-12 14:27:51.196978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.196984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.196987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.196990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9115c0) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.197003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.197012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.197018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.197027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.197033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.197042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.197048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x88dec0)
00:22:59.199 [2024-07-12 14:27:51.197056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.199 [2024-07-12 14:27:51.197066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9115c0, cid 5, qid 0
00:22:59.199 [2024-07-12 14:27:51.197071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911440, cid 4, qid 0
00:22:59.199 [2024-07-12 14:27:51.197075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x911740, cid 6, qid 0
00:22:59.199 [2024-07-12 14:27:51.197079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9118c0, cid 7, qid 0
00:22:59.199 [2024-07-12 14:27:51.197225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.199 [2024-07-12 14:27:51.197230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.199 [2024-07-12 14:27:51.197234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197237] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=8192, cccid=5
00:22:59.199 [2024-07-12 14:27:51.197240] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9115c0) on tqpair(0x88dec0): expected_datao=0, payload_size=8192
00:22:59.199 [2024-07-12 14:27:51.197244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197269] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197273] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.199 [2024-07-12 14:27:51.197283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.199 [2024-07-12 14:27:51.197285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197289] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=512, cccid=4
00:22:59.199 [2024-07-12 14:27:51.197293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x911440) on tqpair(0x88dec0): expected_datao=0, payload_size=512
00:22:59.199 [2024-07-12 14:27:51.197296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197302] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197305] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.199 [2024-07-12 14:27:51.197314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.199 [2024-07-12 14:27:51.197317] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197320] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=512, cccid=6
00:22:59.199 [2024-07-12 14:27:51.197324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x911740) on tqpair(0x88dec0): expected_datao=0, payload_size=512
00:22:59.199 [2024-07-12 14:27:51.197328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:59.199 [2024-07-12 14:27:51.197345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:59.199 [2024-07-12 14:27:51.197348] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197351] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x88dec0): datao=0, datal=4096, cccid=7
00:22:59.199 [2024-07-12 14:27:51.197355] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9118c0) on tqpair(0x88dec0): expected_datao=0, payload_size=4096
00:22:59.199 [2024-07-12 14:27:51.197359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197364] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197367] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.197387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.197390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9115c0) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.197404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.197409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.197412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911440) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.197423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.197428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.197431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911740) on tqpair=0x88dec0
00:22:59.199 [2024-07-12 14:27:51.197440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.199 [2024-07-12 14:27:51.197445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.199 [2024-07-12 14:27:51.197449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.199 [2024-07-12 14:27:51.197453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9118c0) on tqpair=0x88dec0
00:22:59.199 =====================================================
00:22:59.199 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:59.199 =====================================================
00:22:59.199 Controller Capabilities/Features
00:22:59.199 ================================
00:22:59.199 Vendor ID: 8086
00:22:59.199 Subsystem Vendor ID: 8086
00:22:59.199 Serial Number: SPDK00000000000001
00:22:59.199 Model Number: SPDK bdev Controller
00:22:59.199 Firmware Version: 24.09
00:22:59.199 Recommended Arb Burst: 6
00:22:59.199 IEEE OUI Identifier: e4 d2 5c
00:22:59.199 Multi-path I/O
00:22:59.199 May have multiple subsystem ports: Yes
00:22:59.199 May have multiple controllers: Yes
00:22:59.199 Associated with SR-IOV VF: No
00:22:59.199 Max Data Transfer Size: 131072
00:22:59.199 Max Number of Namespaces: 32
00:22:59.199 Max Number of I/O Queues: 127
00:22:59.199 NVMe Specification Version (VS): 1.3
00:22:59.199 NVMe Specification Version (Identify): 1.3
00:22:59.199 Maximum Queue Entries: 128
00:22:59.199 Contiguous Queues Required: Yes
00:22:59.199 Arbitration Mechanisms Supported
00:22:59.199 Weighted Round Robin: Not Supported
00:22:59.199 Vendor Specific: Not Supported
00:22:59.199 Reset Timeout: 15000 ms
00:22:59.199 Doorbell Stride: 4 bytes
00:22:59.199 NVM Subsystem Reset: Not Supported
00:22:59.199 Command Sets Supported
00:22:59.200 NVM Command Set: Supported
00:22:59.200 Boot Partition: Not Supported
00:22:59.200 Memory Page Size Minimum: 4096 bytes
00:22:59.200 Memory Page Size Maximum: 4096 bytes
00:22:59.200 Persistent Memory Region: Not Supported
00:22:59.200 Optional Asynchronous Events Supported
00:22:59.200 Namespace Attribute Notices: Supported
00:22:59.200 Firmware Activation Notices: Not Supported
00:22:59.200 ANA Change Notices: Not Supported
00:22:59.200 PLE Aggregate Log Change Notices: Not Supported
00:22:59.200 LBA Status Info Alert Notices: Not Supported
00:22:59.200 EGE Aggregate Log Change Notices: Not Supported
00:22:59.200 Normal NVM Subsystem Shutdown event: Not Supported
00:22:59.200 Zone Descriptor Change Notices: Not Supported
00:22:59.200 Discovery Log Change Notices: Not Supported
00:22:59.200 Controller Attributes
00:22:59.200 128-bit Host Identifier: Supported
00:22:59.200 Non-Operational Permissive Mode: Not Supported
00:22:59.200 NVM Sets: Not Supported
00:22:59.200 Read Recovery Levels: Not Supported
00:22:59.200 Endurance Groups: Not Supported
00:22:59.200 Predictable Latency Mode: Not Supported
00:22:59.200 Traffic Based Keep ALive: Not Supported
00:22:59.200 Namespace Granularity: Not Supported
00:22:59.200 SQ Associations: Not Supported
00:22:59.200 UUID List: Not Supported
00:22:59.200 Multi-Domain Subsystem: Not Supported
00:22:59.200 Fixed Capacity Management: Not Supported
00:22:59.200 Variable Capacity Management: Not Supported
00:22:59.200 Delete Endurance Group: Not Supported
00:22:59.200 Delete NVM Set: Not Supported
00:22:59.200 Extended LBA Formats Supported: Not Supported
00:22:59.200 Flexible Data Placement Supported: Not Supported
00:22:59.200 
00:22:59.200 Controller Memory Buffer Support
00:22:59.200 ================================
00:22:59.200 Supported: No
00:22:59.200 
00:22:59.200 Persistent Memory Region Support
00:22:59.200 ================================
00:22:59.200 Supported: No
00:22:59.200 
00:22:59.200 Admin Command Set Attributes
00:22:59.200 ============================
00:22:59.200 Security Send/Receive: Not Supported
00:22:59.200 Format NVM: Not Supported
00:22:59.200 Firmware Activate/Download: Not Supported
00:22:59.200 Namespace Management: Not Supported
00:22:59.200 Device Self-Test: Not Supported
00:22:59.200 Directives: Not Supported
00:22:59.200 NVMe-MI: Not Supported
00:22:59.200 Virtualization Management: Not Supported
00:22:59.200 Doorbell Buffer Config: Not Supported
00:22:59.200 Get LBA Status Capability: Not Supported
00:22:59.200 Command & Feature Lockdown Capability: Not Supported
00:22:59.200 Abort Command Limit: 4
00:22:59.200 Async Event Request Limit: 4
00:22:59.200 Number of Firmware Slots: N/A
00:22:59.200 Firmware Slot 1 Read-Only: N/A
00:22:59.200 Firmware Activation Without Reset: N/A
00:22:59.200 Multiple Update Detection Support: N/A
00:22:59.200 Firmware Update Granularity: No Information Provided
00:22:59.200 Per-Namespace SMART Log: No
00:22:59.200 Asymmetric Namespace Access Log Page: Not Supported
00:22:59.200 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:59.200 Command Effects Log Page: Supported
00:22:59.200 Get Log Page Extended Data: Supported
00:22:59.200 Telemetry Log Pages: Not Supported
00:22:59.200 Persistent Event Log Pages: Not Supported
00:22:59.200 Supported Log Pages Log Page: May Support
00:22:59.200 Commands Supported & Effects Log Page: Not Supported
00:22:59.200 Feature Identifiers & Effects Log Page:May Support
00:22:59.200 NVMe-MI 
Commands & Effects Log Page: May Support 00:22:59.200 Data Area 4 for Telemetry Log: Not Supported 00:22:59.200 Error Log Page Entries Supported: 128 00:22:59.200 Keep Alive: Supported 00:22:59.200 Keep Alive Granularity: 10000 ms 00:22:59.200 00:22:59.200 NVM Command Set Attributes 00:22:59.200 ========================== 00:22:59.200 Submission Queue Entry Size 00:22:59.200 Max: 64 00:22:59.200 Min: 64 00:22:59.200 Completion Queue Entry Size 00:22:59.200 Max: 16 00:22:59.200 Min: 16 00:22:59.200 Number of Namespaces: 32 00:22:59.200 Compare Command: Supported 00:22:59.200 Write Uncorrectable Command: Not Supported 00:22:59.200 Dataset Management Command: Supported 00:22:59.200 Write Zeroes Command: Supported 00:22:59.200 Set Features Save Field: Not Supported 00:22:59.200 Reservations: Supported 00:22:59.200 Timestamp: Not Supported 00:22:59.200 Copy: Supported 00:22:59.200 Volatile Write Cache: Present 00:22:59.200 Atomic Write Unit (Normal): 1 00:22:59.200 Atomic Write Unit (PFail): 1 00:22:59.200 Atomic Compare & Write Unit: 1 00:22:59.200 Fused Compare & Write: Supported 00:22:59.200 Scatter-Gather List 00:22:59.200 SGL Command Set: Supported 00:22:59.200 SGL Keyed: Supported 00:22:59.200 SGL Bit Bucket Descriptor: Not Supported 00:22:59.200 SGL Metadata Pointer: Not Supported 00:22:59.200 Oversized SGL: Not Supported 00:22:59.200 SGL Metadata Address: Not Supported 00:22:59.200 SGL Offset: Supported 00:22:59.200 Transport SGL Data Block: Not Supported 00:22:59.200 Replay Protected Memory Block: Not Supported 00:22:59.200 00:22:59.200 Firmware Slot Information 00:22:59.200 ========================= 00:22:59.200 Active slot: 1 00:22:59.200 Slot 1 Firmware Revision: 24.09 00:22:59.200 00:22:59.200 00:22:59.200 Commands Supported and Effects 00:22:59.200 ============================== 00:22:59.200 Admin Commands 00:22:59.200 -------------- 00:22:59.200 Get Log Page (02h): Supported 00:22:59.200 Identify (06h): Supported 00:22:59.200 Abort (08h): Supported 
00:22:59.200 Set Features (09h): Supported 00:22:59.200 Get Features (0Ah): Supported 00:22:59.200 Asynchronous Event Request (0Ch): Supported 00:22:59.200 Keep Alive (18h): Supported 00:22:59.200 I/O Commands 00:22:59.200 ------------ 00:22:59.200 Flush (00h): Supported LBA-Change 00:22:59.200 Write (01h): Supported LBA-Change 00:22:59.200 Read (02h): Supported 00:22:59.200 Compare (05h): Supported 00:22:59.200 Write Zeroes (08h): Supported LBA-Change 00:22:59.200 Dataset Management (09h): Supported LBA-Change 00:22:59.200 Copy (19h): Supported LBA-Change 00:22:59.200 00:22:59.200 Error Log 00:22:59.200 ========= 00:22:59.200 00:22:59.200 Arbitration 00:22:59.200 =========== 00:22:59.200 Arbitration Burst: 1 00:22:59.200 00:22:59.200 Power Management 00:22:59.200 ================ 00:22:59.200 Number of Power States: 1 00:22:59.200 Current Power State: Power State #0 00:22:59.200 Power State #0: 00:22:59.200 Max Power: 0.00 W 00:22:59.200 Non-Operational State: Operational 00:22:59.200 Entry Latency: Not Reported 00:22:59.200 Exit Latency: Not Reported 00:22:59.200 Relative Read Throughput: 0 00:22:59.200 Relative Read Latency: 0 00:22:59.200 Relative Write Throughput: 0 00:22:59.200 Relative Write Latency: 0 00:22:59.200 Idle Power: Not Reported 00:22:59.200 Active Power: Not Reported 00:22:59.200 Non-Operational Permissive Mode: Not Supported 00:22:59.200 00:22:59.200 Health Information 00:22:59.200 ================== 00:22:59.200 Critical Warnings: 00:22:59.200 Available Spare Space: OK 00:22:59.200 Temperature: OK 00:22:59.200 Device Reliability: OK 00:22:59.200 Read Only: No 00:22:59.200 Volatile Memory Backup: OK 00:22:59.200 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:59.200 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:59.200 Available Spare: 0% 00:22:59.200 Available Spare Threshold: 0% 00:22:59.200 Life Percentage Used:[2024-07-12 14:27:51.197534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.200 [2024-07-12 
14:27:51.197539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x88dec0) 00:22:59.200 [2024-07-12 14:27:51.197545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.200 [2024-07-12 14:27:51.197556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9118c0, cid 7, qid 0 00:22:59.200 [2024-07-12 14:27:51.197632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.200 [2024-07-12 14:27:51.197638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.200 [2024-07-12 14:27:51.197641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.200 [2024-07-12 14:27:51.197644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9118c0) on tqpair=0x88dec0 00:22:59.200 [2024-07-12 14:27:51.197672] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:59.200 [2024-07-12 14:27:51.197682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910e40) on tqpair=0x88dec0 00:22:59.200 [2024-07-12 14:27:51.197687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.200 [2024-07-12 14:27:51.197691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x910fc0) on tqpair=0x88dec0 00:22:59.200 [2024-07-12 14:27:51.197695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.200 [2024-07-12 14:27:51.197699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x911140) on tqpair=0x88dec0 00:22:59.200 [2024-07-12 14:27:51.197703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.200 [2024-07-12 
14:27:51.197707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.200 [2024-07-12 14:27:51.197711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.200 [2024-07-12 14:27:51.197718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.200 [2024-07-12 14:27:51.197721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.201 [2024-07-12 14:27:51.197730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.201 [2024-07-12 14:27:51.197740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.201 [2024-07-12 14:27:51.197808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.201 [2024-07-12 14:27:51.197813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.201 [2024-07-12 14:27:51.197816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.201 [2024-07-12 14:27:51.197825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.201 [2024-07-12 14:27:51.197837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.201 [2024-07-12 14:27:51.197849] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.201 [2024-07-12 14:27:51.197930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.201 [2024-07-12 14:27:51.197937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.201 [2024-07-12 14:27:51.197940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.201 [2024-07-12 14:27:51.197947] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:59.201 [2024-07-12 14:27:51.197951] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:59.201 [2024-07-12 14:27:51.197959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.197965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.201 [2024-07-12 14:27:51.197971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.201 [2024-07-12 14:27:51.197980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.201 [2024-07-12 14:27:51.198046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.201 [2024-07-12 14:27:51.198052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.201 [2024-07-12 14:27:51.198055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.198058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.201 [2024-07-12 14:27:51.198066] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.198069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.201 [2024-07-12 14:27:51.198072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.201 [2024-07-12 14:27:51.198078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.201 [2024-07-12 14:27:51.198087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198288] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 
14:27:51.198502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.462 [2024-07-12 14:27:51.198849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.462 [2024-07-12 14:27:51.198851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.462 [2024-07-12 14:27:51.198864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.462 [2024-07-12 14:27:51.198867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.462 
[2024-07-12 14:27:51.198870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.462 [2024-07-12 14:27:51.198876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.462 [2024-07-12 14:27:51.198885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.462 [2024-07-12 14:27:51.198954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.198959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.198962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.198965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.198973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.198976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.198979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.198985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.198994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 
00:22:59.463 [2024-07-12 14:27:51.199078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 
[2024-07-12 14:27:51.199288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 
00:22:59.463 [2024-07-12 14:27:51.199508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199646] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199863] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.199943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.199948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.199951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.199962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.199969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.199974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.199983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.200051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.200056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.200059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200062] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.200070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.200082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.200091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.200156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 14:27:51.200162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.463 [2024-07-12 14:27:51.200165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.463 [2024-07-12 14:27:51.200177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.463 [2024-07-12 14:27:51.200183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.463 [2024-07-12 14:27:51.200189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.463 [2024-07-12 14:27:51.200198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.463 [2024-07-12 14:27:51.200269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.463 [2024-07-12 
14:27:51.200275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.464 [2024-07-12 14:27:51.200278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.200281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.464 [2024-07-12 14:27:51.200289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.200292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.200295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.464 [2024-07-12 14:27:51.200300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.464 [2024-07-12 14:27:51.200310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0 00:22:59.464 [2024-07-12 14:27:51.204384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:59.464 [2024-07-12 14:27:51.204392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:59.464 [2024-07-12 14:27:51.204394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.204398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0 00:22:59.464 [2024-07-12 14:27:51.204408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.204411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:59.464 [2024-07-12 14:27:51.204414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x88dec0) 00:22:59.464 [2024-07-12 14:27:51.204420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.464 [2024-07-12 
14:27:51.204431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9112c0, cid 3, qid 0
00:22:59.464 [2024-07-12 14:27:51.204585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:59.464 [2024-07-12 14:27:51.204590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:59.464 [2024-07-12 14:27:51.204593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:59.464 [2024-07-12 14:27:51.204596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9112c0) on tqpair=0x88dec0
00:22:59.464 [2024-07-12 14:27:51.204602] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:22:59.464 0%
00:22:59.464 Data Units Read: 0
00:22:59.464 Data Units Written: 0
00:22:59.464 Host Read Commands: 0
00:22:59.464 Host Write Commands: 0
00:22:59.464 Controller Busy Time: 0 minutes
00:22:59.464 Power Cycles: 0
00:22:59.464 Power On Hours: 0 hours
00:22:59.464 Unsafe Shutdowns: 0
00:22:59.464 Unrecoverable Media Errors: 0
00:22:59.464 Lifetime Error Log Entries: 0
00:22:59.464 Warning Temperature Time: 0 minutes
00:22:59.464 Critical Temperature Time: 0 minutes
00:22:59.464 
00:22:59.464 Number of Queues
00:22:59.464 ================
00:22:59.464 Number of I/O Submission Queues: 127
00:22:59.464 Number of I/O Completion Queues: 127
00:22:59.464 
00:22:59.464 Active Namespaces
00:22:59.464 =================
00:22:59.464 Namespace ID:1
00:22:59.464 Error Recovery Timeout: Unlimited
00:22:59.464 Command Set Identifier: NVM (00h)
00:22:59.464 Deallocate: Supported
00:22:59.464 Deallocated/Unwritten Error: Not Supported
00:22:59.464 Deallocated Read Value: Unknown
00:22:59.464 Deallocate in Write Zeroes: Not Supported
00:22:59.464 Deallocated Guard Field: 0xFFFF
00:22:59.464 Flush: Supported
00:22:59.464 Reservation: Supported
00:22:59.464 Namespace Sharing Capabilities: Multiple Controllers
00:22:59.464 Size (in LBAs): 131072 (0GiB)
00:22:59.464 Capacity (in LBAs): 131072 (0GiB)
00:22:59.464 Utilization (in LBAs): 131072 (0GiB)
00:22:59.464 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:59.464 EUI64: ABCDEF0123456789
00:22:59.464 UUID: 63751ce3-3dff-40b7-8f28-da0184634ab5
00:22:59.464 Thin Provisioning: Not Supported
00:22:59.464 Per-NS Atomic Units: Yes
00:22:59.464 Atomic Boundary Size (Normal): 0
00:22:59.464 Atomic Boundary Size (PFail): 0
00:22:59.464 Atomic Boundary Offset: 0
00:22:59.464 Maximum Single Source Range Length: 65535
00:22:59.464 Maximum Copy Length: 65535
00:22:59.464 Maximum Source Range Count: 1
00:22:59.464 NGUID/EUI64 Never Reused: No
00:22:59.464 Namespace Write Protected: No
00:22:59.464 Number of LBA Formats: 1
00:22:59.464 Current LBA Format: LBA Format #00
00:22:59.464 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:59.464 
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.464 rmmod nvme_tcp 00:22:59.464 rmmod nvme_fabrics 00:22:59.464 rmmod nvme_keyring 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2626463 ']' 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2626463 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2626463 ']' 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2626463 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2626463 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2626463' 00:22:59.464 killing process with pid 2626463 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2626463 00:22:59.464 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2626463 00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:59.723 14:27:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:01.679 14:27:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:01.679 
00:23:01.679 real 0m9.106s
00:23:01.679 user 0m7.755s
00:23:01.679 sys 0m4.281s
00:23:01.679 14:27:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:01.679 14:27:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:01.679 ************************************
00:23:01.679 END TEST nvmf_identify
00:23:01.679 ************************************
00:23:01.679 14:27:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:01.679 14:27:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:23:01.679 14:27:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:01.679 14:27:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:01.679 14:27:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:01.939 ************************************
00:23:01.939 START TEST nvmf_perf
00:23:01.939 ************************************
00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:23:01.939 * Looking for test storage... 
00:23:01.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.939 14:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.217 14:27:59 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.217 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.217 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.218 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:23:07.218 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.218 
14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:07.218 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:07.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:07.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:23:07.478 
00:23:07.478 --- 10.0.0.2 ping statistics ---
00:23:07.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:07.478 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:07.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:07.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:23:07.478 
00:23:07.478 --- 10.0.0.1 ping statistics ---
00:23:07.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:07.478 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2630222
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2630222
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2630222 ']'
00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.478 14:27:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.478 [2024-07-12 14:27:59.345676] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:23:07.478 [2024-07-12 14:27:59.345716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.478 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.478 [2024-07-12 14:27:59.402330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.478 [2024-07-12 14:27:59.482011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.478 [2024-07-12 14:27:59.482049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.478 [2024-07-12 14:27:59.482056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.478 [2024-07-12 14:27:59.482062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.478 [2024-07-12 14:27:59.482067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.478 [2024-07-12 14:27:59.482103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.478 [2024-07-12 14:27:59.482203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.478 [2024-07-12 14:27:59.482289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.478 [2024-07-12 14:27:59.482290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:08.416 14:28:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:11.704 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.963 [2024-07-12 14:28:03.760144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.963 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.222 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:12.222 14:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.222 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:12.222 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:12.482 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.741 [2024-07-12 14:28:04.502907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.741 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:12.741 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:12.741 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:23:12.741 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:12.741 14:28:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:14.118 Initializing NVMe Controllers 00:23:14.118 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:14.118 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:14.118 Initialization complete. Launching workers. 00:23:14.118 ======================================================== 00:23:14.118 Latency(us) 00:23:14.118 Device Information : IOPS MiB/s Average min max 00:23:14.118 PCIE (0000:5e:00.0) NSID 1 from core 0: 97790.03 381.99 326.84 31.01 7220.20 00:23:14.118 ======================================================== 00:23:14.118 Total : 97790.03 381.99 326.84 31.01 7220.20 00:23:14.118 00:23:14.118 14:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:14.118 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.497 Initializing NVMe Controllers 00:23:15.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:15.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:15.497 Initialization complete. Launching workers. 
00:23:15.497 ======================================================== 00:23:15.497 Latency(us) 00:23:15.497 Device Information : IOPS MiB/s Average min max 00:23:15.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12780.81 115.11 44692.51 00:23:15.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21849.01 5540.71 48155.94 00:23:15.497 ======================================================== 00:23:15.497 Total : 126.00 0.49 16091.43 115.11 48155.94 00:23:15.497 00:23:15.497 14:28:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:15.497 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.436 Initializing NVMe Controllers 00:23:16.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:16.436 Initialization complete. Launching workers. 
00:23:16.436 ======================================================== 00:23:16.436 Latency(us) 00:23:16.436 Device Information : IOPS MiB/s Average min max 00:23:16.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11046.76 43.15 2897.64 423.39 6256.25 00:23:16.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.91 15.27 8244.68 7153.63 15803.78 00:23:16.436 ======================================================== 00:23:16.436 Total : 14954.67 58.42 4294.91 423.39 15803.78 00:23:16.436 00:23:16.436 14:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:16.436 14:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:16.436 14:28:08 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:16.695 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.228 Initializing NVMe Controllers 00:23:19.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.228 Controller IO queue size 128, less than required. 00:23:19.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.228 Controller IO queue size 128, less than required. 00:23:19.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:19.228 Initialization complete. Launching workers. 
00:23:19.228 ======================================================== 00:23:19.228 Latency(us) 00:23:19.228 Device Information : IOPS MiB/s Average min max 00:23:19.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1881.92 470.48 68770.61 43261.09 96581.13 00:23:19.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.37 141.84 231851.66 85801.31 334066.06 00:23:19.228 ======================================================== 00:23:19.228 Total : 2449.30 612.32 106547.95 43261.09 334066.06 00:23:19.228 00:23:19.228 14:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:19.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.228 No valid NVMe controllers or AIO or URING devices found 00:23:19.228 Initializing NVMe Controllers 00:23:19.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.228 Controller IO queue size 128, less than required. 00:23:19.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.228 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:19.228 Controller IO queue size 128, less than required. 00:23:19.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.228 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:19.228 WARNING: Some requested NVMe devices were skipped 00:23:19.228 14:28:10 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:19.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.765 Initializing NVMe Controllers 00:23:21.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.765 Controller IO queue size 128, less than required. 00:23:21.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:21.765 Controller IO queue size 128, less than required. 00:23:21.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:21.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:21.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:21.765 Initialization complete. Launching workers. 
00:23:21.765 00:23:21.765 ==================== 00:23:21.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:21.765 TCP transport: 00:23:21.765 polls: 35184 00:23:21.765 idle_polls: 14551 00:23:21.765 sock_completions: 20633 00:23:21.765 nvme_completions: 5265 00:23:21.765 submitted_requests: 7826 00:23:21.765 queued_requests: 1 00:23:21.765 00:23:21.765 ==================== 00:23:21.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:21.765 TCP transport: 00:23:21.765 polls: 35828 00:23:21.765 idle_polls: 14971 00:23:21.765 sock_completions: 20857 00:23:21.765 nvme_completions: 5329 00:23:21.765 submitted_requests: 7986 00:23:21.765 queued_requests: 1 00:23:21.765 ======================================================== 00:23:21.765 Latency(us) 00:23:21.765 Device Information : IOPS MiB/s Average min max 00:23:21.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1316.00 329.00 98923.96 61851.75 149343.27 00:23:21.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1332.00 333.00 98066.37 41650.52 147288.07 00:23:21.765 ======================================================== 00:23:21.765 Total : 2648.00 662.00 98492.57 41650.52 149343.27 00:23:21.765 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:21.765 14:28:13 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.765 rmmod nvme_tcp 00:23:21.765 rmmod nvme_fabrics 00:23:21.765 rmmod nvme_keyring 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:21.765 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2630222 ']' 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2630222 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2630222 ']' 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2630222 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2630222 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2630222' 00:23:22.025 killing process with pid 2630222 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2630222 00:23:22.025 14:28:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2630222 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.404 14:28:15 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.404 14:28:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.943 14:28:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.943 00:23:25.943 real 0m23.666s 00:23:25.943 user 1m3.335s 00:23:25.943 sys 0m7.170s 00:23:25.943 14:28:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.943 14:28:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.943 ************************************ 00:23:25.943 END TEST nvmf_perf 00:23:25.943 ************************************ 00:23:25.943 14:28:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:25.943 14:28:17 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:25.943 14:28:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:25.943 14:28:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.943 14:28:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.943 ************************************ 00:23:25.943 START TEST nvmf_fio_host 00:23:25.943 ************************************ 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:25.943 * Looking for test 
storage... 00:23:25.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:25.943 
14:28:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.943 14:28:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:31.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:31.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:31.257 Found net devices under 0000:86:00.0: cvl_0_0 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:31.257 Found net devices under 0000:86:00.1: cvl_0_1 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.257 
14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:23:31.257 00:23:31.257 --- 10.0.0.2 ping statistics --- 00:23:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.257 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:23:31.257 00:23:31.257 --- 10.0.0.1 ping statistics --- 00:23:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.257 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.257 14:28:22 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2636108 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2636108 00:23:31.257 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2636108 ']' 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.258 14:28:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.258 [2024-07-12 14:28:22.929036] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:23:31.258 [2024-07-12 14:28:22.929081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.258 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.258 [2024-07-12 14:28:22.985903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.258 [2024-07-12 14:28:23.067053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.258 [2024-07-12 14:28:23.067091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.258 [2024-07-12 14:28:23.067101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.258 [2024-07-12 14:28:23.067107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.258 [2024-07-12 14:28:23.067112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:31.258 [2024-07-12 14:28:23.067146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.258 [2024-07-12 14:28:23.067163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.258 [2024-07-12 14:28:23.067252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.258 [2024-07-12 14:28:23.067253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.825 14:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.825 14:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:31.825 14:28:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.083 [2024-07-12 14:28:23.894793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.083 14:28:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:32.083 14:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.083 14:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.083 14:28:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:32.342 Malloc1 00:23:32.342 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.342 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:32.601 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.859 
[2024-07-12 14:28:24.689072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.859 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:33.118 14:28:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:33.376 fio-3.35 00:23:33.376 Starting 1 thread 00:23:33.376 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.917 00:23:35.917 test: (groupid=0, jobs=1): err= 0: pid=2636700: Fri Jul 12 14:28:27 2024 00:23:35.917 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec) 00:23:35.917 slat (nsec): min=1599, max=206977, avg=1739.81, stdev=1859.08 
00:23:35.917 clat (usec): min=2507, max=10613, avg=6063.06, stdev=443.92 00:23:35.917 lat (usec): min=2534, max=10615, avg=6064.80, stdev=443.78 00:23:35.917 clat percentiles (usec): 00:23:35.917 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:35.917 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:23:35.917 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:23:35.917 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9765], 00:23:35.917 | 99.99th=[10552] 00:23:35.917 bw ( KiB/s): min=45424, max=47488, per=99.95%, avg=46644.00, stdev=871.02, samples=4 00:23:35.917 iops : min=11356, max=11872, avg=11661.00, stdev=217.76, samples=4 00:23:35.917 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.7MiB/2005msec); 0 zone resets 00:23:35.917 slat (nsec): min=1652, max=179684, avg=1808.08, stdev=1302.14 00:23:35.917 clat (usec): min=1914, max=9292, avg=4869.23, stdev=372.66 00:23:35.917 lat (usec): min=1926, max=9293, avg=4871.04, stdev=372.58 00:23:35.917 clat percentiles (usec): 00:23:35.917 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:23:35.917 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:23:35.917 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:35.917 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7046], 99.95th=[ 8455], 00:23:35.917 | 99.99th=[ 9241] 00:23:35.917 bw ( KiB/s): min=45776, max=46912, per=100.00%, avg=46340.00, stdev=466.73, samples=4 00:23:35.917 iops : min=11444, max=11728, avg=11585.00, stdev=116.68, samples=4 00:23:35.917 lat (msec) : 2=0.01%, 4=0.48%, 10=99.50%, 20=0.01% 00:23:35.917 cpu : usr=74.45%, sys=23.85%, ctx=83, majf=0, minf=6 00:23:35.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:35.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:23:35.917 issued rwts: total=23391,23228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:35.917 00:23:35.917 Run status group 0 (all jobs): 00:23:35.917 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:23:35.917 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.7MiB (95.1MB), run=2005-2005msec 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:35.917 14:28:27 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:35.917 14:28:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:36.175 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:36.175 fio-3.35 00:23:36.175 Starting 1 thread 00:23:36.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.708 00:23:38.708 test: (groupid=0, jobs=1): err= 0: pid=2637272: Fri Jul 12 14:28:30 2024 00:23:38.708 read: IOPS=10.8k, BW=169MiB/s (178MB/s)(339MiB/2003msec) 00:23:38.708 slat (nsec): min=2618, max=84396, avg=2845.44, stdev=1209.95 00:23:38.708 clat (usec): min=1927, max=12809, 
avg=6798.58, stdev=1584.06 00:23:38.708 lat (usec): min=1930, max=12812, avg=6801.42, stdev=1584.14 00:23:38.708 clat percentiles (usec): 00:23:38.708 | 1.00th=[ 3720], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342], 00:23:38.708 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7308], 00:23:38.708 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[ 9372], 00:23:38.708 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12387], 99.95th=[12518], 00:23:38.708 | 99.99th=[12780] 00:23:38.708 bw ( KiB/s): min=82784, max=97280, per=50.93%, avg=88320.00, stdev=6739.43, samples=4 00:23:38.708 iops : min= 5174, max= 6080, avg=5520.00, stdev=421.21, samples=4 00:23:38.708 write: IOPS=6401, BW=100MiB/s (105MB/s)(181MiB/1806msec); 0 zone resets 00:23:38.708 slat (usec): min=29, max=255, avg=31.89, stdev= 5.72 00:23:38.708 clat (usec): min=3022, max=14141, avg=8650.35, stdev=1513.26 00:23:38.708 lat (usec): min=3053, max=14173, avg=8682.24, stdev=1513.98 00:23:38.708 clat percentiles (usec): 00:23:38.708 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7373], 00:23:38.708 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:38.708 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[11469], 00:23:38.708 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13566], 99.95th=[13960], 00:23:38.708 | 99.99th=[14091] 00:23:38.708 bw ( KiB/s): min=86176, max=101376, per=89.82%, avg=92008.00, stdev=7280.28, samples=4 00:23:38.708 iops : min= 5386, max= 6336, avg=5750.50, stdev=455.02, samples=4 00:23:38.708 lat (msec) : 2=0.01%, 4=1.59%, 10=90.48%, 20=7.92% 00:23:38.708 cpu : usr=85.92%, sys=13.23%, ctx=38, majf=0, minf=3 00:23:38.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:38.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.708 issued rwts: total=21709,11562,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:38.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.708 00:23:38.708 Run status group 0 (all jobs): 00:23:38.708 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=339MiB (356MB), run=2003-2003msec 00:23:38.708 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=181MiB (189MB), run=1806-1806msec 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.708 rmmod nvme_tcp 00:23:38.708 rmmod nvme_fabrics 00:23:38.708 rmmod nvme_keyring 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2636108 ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2636108 
00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2636108 ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2636108 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2636108 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2636108' 00:23:38.708 killing process with pid 2636108 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2636108 00:23:38.708 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2636108 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.967 14:28:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.872 14:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:40.872 
00:23:40.872 real 0m15.416s 00:23:40.872 user 0m47.316s 00:23:40.872 sys 0m5.934s 00:23:40.872 14:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.872 14:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.872 ************************************ 00:23:40.872 END TEST nvmf_fio_host 00:23:40.872 ************************************ 00:23:40.872 14:28:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:40.872 14:28:32 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:40.872 14:28:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:40.872 14:28:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.141 14:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.141 ************************************ 00:23:41.141 START TEST nvmf_failover 00:23:41.141 ************************************ 00:23:41.141 14:28:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:41.141 * Looking for test storage... 
00:23:41.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.141 14:28:32 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.141 14:28:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.141 14:28:33 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.141 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.142 14:28:33 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.142 14:28:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.420 14:28:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:46.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:46.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.420 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:46.421 Found net devices under 0000:86:00.0: cvl_0_0 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:46.421 Found net devices under 0000:86:00.1: cvl_0_1 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:23:46.421 00:23:46.421 --- 10.0.0.2 ping statistics --- 00:23:46.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.421 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:46.421 00:23:46.421 --- 10.0.0.1 ping statistics --- 00:23:46.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.421 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2641010 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2641010 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2641010 ']' 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:46.421 14:28:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.421 [2024-07-12 14:28:37.995921] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:23:46.421 [2024-07-12 14:28:37.995968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.421 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.421 [2024-07-12 14:28:38.051929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:46.421 [2024-07-12 14:28:38.130998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.421 [2024-07-12 14:28:38.131031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.421 [2024-07-12 14:28:38.131039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.421 [2024-07-12 14:28:38.131044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.421 [2024-07-12 14:28:38.131049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.421 [2024-07-12 14:28:38.131159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.421 [2024-07-12 14:28:38.131257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.421 [2024-07-12 14:28:38.131255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.988 14:28:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:46.988 [2024-07-12 14:28:38.972202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.247 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:47.247 Malloc0 00:23:47.247 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.505 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.764 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.764 [2024-07-12 14:28:39.710436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.764 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:48.023 [2024-07-12 14:28:39.898960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:48.023 14:28:39 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:48.282 [2024-07-12 14:28:40.091643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2641476 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2641476 /var/tmp/bdevperf.sock 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2641476 ']' 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.282 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:49.215 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.215 14:28:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:49.215 14:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.472 NVMe0n1 00:23:49.472 14:28:41 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.729 00:23:49.729 14:28:41 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2641726 00:23:49.729 14:28:41 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.729 14:28:41 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:50.661 14:28:42 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.919 [2024-07-12 14:28:42.737597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff080 is same with the state(5) to be set 00:23:50.919 [2024-07-12 14:28:42.737641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff080 is 
same with the state(5) to be set 00:23:50.919 [previous message repeated 36 more times] [2024-07-12 14:28:42.737871] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff080 is same with the state(5) to be set 00:23:50.920 14:28:42 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:54.256 14:28:45 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.256 00:23:54.256 14:28:46 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.256 [2024-07-12 14:28:46.251591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.256 [2024-07-12 14:28:46.251679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the 
state(5) to be set 00:23:54.256 [previous message repeated 20 more times] [2024-07-12 14:28:46.251804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fff20 is same with the state(5) to be set 00:23:54.257 14:28:46 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:57.803 14:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.803 [2024-07-12 14:28:49.452334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.803 14:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:58.737 
14:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:58.737 [2024-07-12 14:28:50.653477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 [2024-07-12 14:28:50.653575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800aa0 is same with the state(5) to be set 00:23:58.737 14:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2641726 00:24:05.312 0 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2641476 ']' 00:24:05.312 
14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2641476' 00:24:05.312 killing process with pid 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2641476 00:24:05.312 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.312 [2024-07-12 14:28:40.167371] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:24:05.312 [2024-07-12 14:28:40.167438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641476 ] 00:24:05.312 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.312 [2024-07-12 14:28:40.221702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.312 [2024-07-12 14:28:40.297928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.312 Running I/O for 15 seconds... 
00:24:05.312 [2024-07-12 14:28:42.739394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.312 [2024-07-12 14:28:42.739431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.312 [2024-07-12 14:28:42.739445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.312 [2024-07-12 14:28:42.739453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.312 [2024-07-12 14:28:42.739462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.312 [2024-07-12 14:28:42.739470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.312 [2024-07-12 14:28:42.739479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.312 [2024-07-12 14:28:42.739486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.312 [2024-07-12 14:28:42.739494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.312 [2024-07-12 14:28:42.739501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.312 [2024-07-12 14:28:42.739510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 
[2024-07-12 14:28:42.739694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.739745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 
14:28:42.739945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.739991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740026] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.740071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.740085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.313 [2024-07-12 14:28:42.740100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.313 [2024-07-12 14:28:42.740145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.313 [2024-07-12 14:28:42.740153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740199] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.314 [2024-07-12 14:28:42.740369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.314 [2024-07-12 14:28:42.740580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 
14:28:42.740640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.314 [2024-07-12 14:28:42.740767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.314 [2024-07-12 14:28:42.740773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.314 [2024-07-12 14:28:42.740779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:24:05.314 [2024-07-12 14:28:42.740785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740802] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.315 [2024-07-12 14:28:42.740886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:24:05.315 [2024-07-12 14:28:42.740969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.740982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.740987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.740993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.740999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 
[2024-07-12 14:28:42.741134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.315 [2024-07-12 14:28:42.741297] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.315 [2024-07-12 14:28:42.741303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.315 [2024-07-12 14:28:42.741308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:24:05.315 [2024-07-12 14:28:42.741315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.741331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.741354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 
14:28:42.741381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.741404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.741430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.741443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.741448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.741453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.741459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.751848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.751883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.751913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751928] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.751943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.751974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.751982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.751989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.751996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 
[2024-07-12 14:28:42.752036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.316 [2024-07-12 14:28:42.752266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.316 [2024-07-12 14:28:42.752273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:24:05.316 [2024-07-12 14:28:42.752282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752326] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f31300 was disconnected and freed. reset controller. 00:24:05.316 [2024-07-12 14:28:42.752336] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:05.316 [2024-07-12 14:28:42.752360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.316 [2024-07-12 14:28:42.752370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.316 [2024-07-12 14:28:42.752398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:05.316 [2024-07-12 14:28:42.752415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.316 [2024-07-12 14:28:42.752424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.317 [2024-07-12 14:28:42.752433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.317 [2024-07-12 14:28:42.752441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:05.317 [2024-07-12 14:28:42.752481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f13540 (9): Bad file descriptor 00:24:05.317 [2024-07-12 14:28:42.756185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.317 [2024-07-12 14:28:42.791145] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:05.317 [2024-07-12 14:28:46.253263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:05.317 [2024-07-12 14:28:46.253300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.317 ... (the same READ command / "ABORTED - SQ DELETION (00/08)" completion pair repeats for lba:20856 through lba:21088, len:8 each, varying cid) ...
00:24:05.318 [2024-07-12 14:28:46.253772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.318 [2024-07-12 14:28:46.253780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.319 ... (the same WRITE command / "ABORTED - SQ DELETION (00/08)" completion pair repeats for lba:21120 through lba:21488, len:8 each, varying cid) ...
00:24:05.319 [2024-07-12 14:28:46.254494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.319 [2024-07-12 14:28:46.254503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21496 len:8 PRP1 0x0 PRP2 0x0
00:24:05.319 [2024-07-12 14:28:46.254510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.319 [2024-07-12 14:28:46.254519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.320 ... (the same manual-completion / WRITE cid:0 / "ABORTED - SQ DELETION (00/08)" / "aborting queued i/o" sequence repeats for lba:21504 through lba:21680, len:8 each) ...
00:24:05.320 [2024-07-12 14:28:46.255065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12
14:28:46.255072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21688 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21704 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21712 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21720 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21736 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255230] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.320 [2024-07-12 14:28:46.255235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21744 len:8 PRP1 0x0 PRP2 0x0 00:24:05.320 [2024-07-12 14:28:46.255241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.320 [2024-07-12 14:28:46.255248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.320 [2024-07-12 14:28:46.255253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.255259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21752 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.255266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.255272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.255277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.255282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.255288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.255295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.255301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.255307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21768 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 
[2024-07-12 14:28:46.255313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.255319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.255324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.255329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21776 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.255342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.255347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.255352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21784 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.255358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.255365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21800 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21808 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21816 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21832 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21840 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21848 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21864 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:05.321 [2024-07-12 14:28:46.266579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:05.321 [2024-07-12 14:28:46.266586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:24:05.321 [2024-07-12 14:28:46.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266642] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20de380 was disconnected and freed. reset controller. 00:24:05.321 [2024-07-12 14:28:46.266653] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:05.321 [2024-07-12 14:28:46.266680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.321 [2024-07-12 14:28:46.266690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.321 [2024-07-12 14:28:46.266710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.321 [2024-07-12 14:28:46.266728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.321 [2024-07-12 14:28:46.266746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:46.266754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:05.321 [2024-07-12 14:28:46.266789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f13540 (9): Bad file descriptor 00:24:05.321 [2024-07-12 14:28:46.270645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.321 [2024-07-12 14:28:46.428310] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:05.321 [2024-07-12 14:28:50.654561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.321 [2024-07-12 14:28:50.654601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:50.654616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.321 [2024-07-12 14:28:50.654625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.321 [2024-07-12 14:28:50.654634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.321 [2024-07-12 14:28:50.654641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 
14:28:50.654750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654831] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.654991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.654997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:05.322 [2024-07-12 14:28:50.655166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.322 [2024-07-12 14:28:50.655173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.322 [2024-07-12 14:28:50.655187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.322 [2024-07-12 14:28:50.655195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 
14:28:50.655416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655494] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.323 [2024-07-12 14:28:50.655689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.323 [2024-07-12 14:28:50.655695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 
[2024-07-12 14:28:50.655824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.655990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.655998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 
14:28:50.656238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.324 [2024-07-12 14:28:50.656309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.324 [2024-07-12 14:28:50.656315] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.325 [2024-07-12 14:28:50.656329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60576 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60584 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60592 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60600 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60608 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60616 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60624 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60632 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:05.325 [2024-07-12 14:28:50.656555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:05.325 [2024-07-12 14:28:50.656560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59928 len:8 PRP1 0x0 PRP2 0x0
00:24:05.325 [2024-07-12 14:28:50.656566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:05.325 [2024-07-12 14:28:50.656608] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20de170 was disconnected and freed. reset controller.
00:24:05.325 [2024-07-12 14:28:50.656617] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:05.325 [2024-07-12 14:28:50.656636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.325 [2024-07-12 14:28:50.656644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.325 [2024-07-12 14:28:50.656651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.325 [2024-07-12 14:28:50.656658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.325 [2024-07-12 14:28:50.656665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.325 [2024-07-12 14:28:50.656671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.325 [2024-07-12 14:28:50.656678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.325 [2024-07-12 14:28:50.656684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.325 [2024-07-12 14:28:50.656690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:05.325 [2024-07-12 14:28:50.656710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f13540 (9): Bad file descriptor 00:24:05.325 [2024-07-12 14:28:50.659533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:05.325 [2024-07-12 14:28:50.688472] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:05.325 00:24:05.325 Latency(us) 00:24:05.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:05.325 Verification LBA range: start 0x0 length 0x4000 00:24:05.325 NVMe0n1 : 15.01 10896.86 42.57 653.90 0.00 11059.23 423.85 21655.37 00:24:05.325 =================================================================================================================== 00:24:05.325 Total : 10896.86 42.57 653.90 0.00 11059.23 423.85 21655.37 00:24:05.325 Received shutdown signal, test time was about 15.000000 seconds 00:24:05.325 00:24:05.325 Latency(us) 00:24:05.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.325 =================================================================================================================== 00:24:05.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2644126 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2644126 /var/tmp/bdevperf.sock 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2644126 ']' 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.325 14:28:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:05.893 14:28:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.893 14:28:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:05.893 14:28:57 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:06.152 [2024-07-12 14:28:57.959364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:06.152 14:28:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:06.152 [2024-07-12 14:28:58.139861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:06.411 14:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.670 NVMe0n1 00:24:06.670 14:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.929 00:24:06.929 14:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.187 00:24:07.187 14:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.187 14:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:07.445 14:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.445 14:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:10.731 14:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:10.731 14:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:10.731 14:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2645029 00:24:10.731 14:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.731 14:29:02 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2645029 00:24:11.687 0 00:24:11.687 14:29:03 
nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.687 [2024-07-12 14:28:56.996244] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:24:11.687 [2024-07-12 14:28:56.996298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644126 ] 00:24:11.687 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.687 [2024-07-12 14:28:57.052234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.687 [2024-07-12 14:28:57.121779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.687 [2024-07-12 14:28:59.352041] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:11.687 [2024-07-12 14:28:59.352085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.687 [2024-07-12 14:28:59.352096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.687 [2024-07-12 14:28:59.352104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.687 [2024-07-12 14:28:59.352111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.687 [2024-07-12 14:28:59.352119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.688 [2024-07-12 14:28:59.352125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:11.688 [2024-07-12 14:28:59.352132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.688 [2024-07-12 14:28:59.352138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.688 [2024-07-12 14:28:59.352145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:11.688 [2024-07-12 14:28:59.352170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.688 [2024-07-12 14:28:59.352183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a540 (9): Bad file descriptor 00:24:11.688 [2024-07-12 14:28:59.357628] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:11.688 Running I/O for 1 seconds... 00:24:11.688 00:24:11.688 Latency(us) 00:24:11.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.688 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:11.688 Verification LBA range: start 0x0 length 0x4000 00:24:11.688 NVMe0n1 : 1.00 10926.76 42.68 0.00 0.00 11669.73 740.84 15842.62 00:24:11.688 =================================================================================================================== 00:24:11.688 Total : 10926.76 42.68 0.00 0.00 11669.73 740.84 15842.62 00:24:11.688 14:29:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.688 14:29:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:11.946 14:29:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.204 14:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:12.204 14:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:12.462 14:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.462 14:29:04 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2644126 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2644126 ']' 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2644126 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2644126 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:15.773 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2644126' 00:24:15.773 killing process with pid 2644126 00:24:15.774 14:29:07 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@967 -- # kill 2644126 00:24:15.774 14:29:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2644126 00:24:16.033 14:29:07 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:16.033 14:29:07 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.033 14:29:08 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:16.033 14:29:08 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.292 rmmod nvme_tcp 00:24:16.292 rmmod nvme_fabrics 00:24:16.292 rmmod nvme_keyring 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2641010 ']' 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2641010 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2641010 ']' 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 
2641010 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2641010 00:24:16.292 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.293 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.293 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2641010' 00:24:16.293 killing process with pid 2641010 00:24:16.293 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2641010 00:24:16.293 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2641010 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.552 14:29:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.533 14:29:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.533 00:24:18.533 real 0m37.516s 00:24:18.533 user 2m2.483s 00:24:18.533 sys 0m6.991s 00:24:18.533 14:29:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.533 
14:29:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 ************************************ 00:24:18.533 END TEST nvmf_failover 00:24:18.533 ************************************ 00:24:18.533 14:29:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:18.533 14:29:10 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:18.533 14:29:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:18.533 14:29:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.533 14:29:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 ************************************ 00:24:18.533 START TEST nvmf_host_discovery 00:24:18.533 ************************************ 00:24:18.533 14:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:18.793 * Looking for test storage... 
00:24:18.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.793 14:29:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.793 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.794 14:29:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.794 14:29:10 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:24:18.794 14:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:24.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.067 14:29:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:24.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.067 
14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:24.067 Found net devices under 0000:86:00.0: cvl_0_0 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:24.067 Found net devices under 0000:86:00.1: cvl_0_1 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.067 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.068 14:29:15 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:24:24.068 00:24:24.068 --- 10.0.0.2 ping statistics --- 00:24:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.068 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:24.068 00:24:24.068 --- 10.0.0.1 ping statistics --- 00:24:24.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.068 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.068 14:29:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2649403 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2649403 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2649403 ']' 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.068 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.068 [2024-07-12 14:29:16.053576] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:24:24.068 [2024-07-12 14:29:16.053622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.328 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.328 [2024-07-12 14:29:16.110313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.328 [2024-07-12 14:29:16.189759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.328 [2024-07-12 14:29:16.189794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.328 [2024-07-12 14:29:16.189801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.328 [2024-07-12 14:29:16.189807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.328 [2024-07-12 14:29:16.189812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:24.328 [2024-07-12 14:29:16.189846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.896 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.896 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:24.896 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.897 [2024-07-12 14:29:16.892780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.897 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.897 [2024-07-12 14:29:16.900906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 null0 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 null1 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2649636 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2649636 /tmp/host.sock 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2649636 ']' 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:25.156 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.156 14:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 [2024-07-12 14:29:16.972691] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:24:25.156 [2024-07-12 14:29:16.972730] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649636 ] 00:24:25.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.156 [2024-07-12 14:29:17.026234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.156 [2024-07-12 14:29:17.106503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.092 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.093 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.352 [2024-07-12 14:29:18.120100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.352 14:29:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.352 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.353 14:29:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:26.353 14:29:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:26.921 [2024-07-12 14:29:18.840965] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:26.921 [2024-07-12 14:29:18.840984] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:26.921 [2024-07-12 14:29:18.840995] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.179 [2024-07-12 14:29:18.969405] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:27.179 [2024-07-12 14:29:19.073396] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:24:27.179 [2024-07-12 14:29:19.073415] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:27.438 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.697 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:27.957 14:29:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.957 [2024-07-12 14:29:19.820823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.957 [2024-07-12 14:29:19.821440] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:27.957 [2024-07-12 14:29:19.821462] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:27.957 14:29:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.957 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.958 [2024-07-12 14:29:19.908042] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.958 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.216 [2024-07-12 14:29:19.966428] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.216 [2024-07-12 14:29:19.966445] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.216 [2024-07-12 14:29:19.966451] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:28.216 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:28.216 14:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:29.154 14:29:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.154 [2024-07-12 14:29:21.069026] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:29.154 [2024-07-12 14:29:21.069047] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.154 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:29.154 [2024-07-12 14:29:21.074483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.154 [2024-07-12 14:29:21.074501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.154 [2024-07-12 14:29:21.074510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.154 [2024-07-12 14:29:21.074517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.154 [2024-07-12 14:29:21.074524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.155 [2024-07-12 14:29:21.074531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.155 [2024-07-12 14:29:21.074538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.155 [2024-07-12 14:29:21.074544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.155 [2024-07-12 14:29:21.074555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 14:29:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:29.155 [2024-07-12 14:29:21.084495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.155 [2024-07-12 14:29:21.094532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 [2024-07-12 14:29:21.094801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.094814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.094822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.094833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.094842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.094848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.094855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.155 [2024-07-12 14:29:21.094865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.155 [2024-07-12 14:29:21.104586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 [2024-07-12 14:29:21.104781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.104796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.104803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.104814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.104822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.104828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.104834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.155 [2024-07-12 14:29:21.104843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.155 [2024-07-12 14:29:21.114636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 [2024-07-12 14:29:21.114822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.114835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.114843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.114853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.114862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.114868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.114874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.155 [2024-07-12 14:29:21.114883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.155 [2024-07-12 14:29:21.124690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.155 [2024-07-12 14:29:21.124805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.124821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.124828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.124838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.124847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.124853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.124860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:29.155 [2024-07-12 14:29:21.124869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:29.155 [2024-07-12 14:29:21.134745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 [2024-07-12 14:29:21.134854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.134867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.134874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.134884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.134894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.134899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.134906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.155 [2024-07-12 14:29:21.134915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.155 [2024-07-12 14:29:21.144799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.155 [2024-07-12 14:29:21.144992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.155 [2024-07-12 14:29:21.145004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ff10 with addr=10.0.0.2, port=4420 00:24:29.155 [2024-07-12 14:29:21.145010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ff10 is same with the state(5) to be set 00:24:29.155 [2024-07-12 14:29:21.145020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ff10 (9): Bad file descriptor 00:24:29.155 [2024-07-12 14:29:21.145029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.155 [2024-07-12 14:29:21.145035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.155 [2024-07-12 14:29:21.145041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.155 [2024-07-12 14:29:21.145050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.155 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.155 [2024-07-12 14:29:21.154728] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:29.155 [2024-07-12 14:29:21.154745] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:29.415 14:29:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:29.415 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.416 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.674 14:29:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.611 [2024-07-12 14:29:22.490515] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:30.611 [2024-07-12 14:29:22.490532] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:30.611 [2024-07-12 14:29:22.490543] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:30.611 [2024-07-12 14:29:22.576801] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:30.870 [2024-07-12 14:29:22.840689] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:30.870 [2024-07-12 14:29:22.840716] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.870 request: 00:24:30.870 { 00:24:30.870 "name": "nvme", 00:24:30.870 "trtype": "tcp", 00:24:30.870 "traddr": "10.0.0.2", 00:24:30.870 "adrfam": "ipv4", 00:24:30.870 "trsvcid": "8009", 00:24:30.870 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:30.870 "wait_for_attach": true, 00:24:30.870 "method": "bdev_nvme_start_discovery", 00:24:30.870 "req_id": 1 00:24:30.870 } 00:24:30.870 Got JSON-RPC error 
response 00:24:30.870 response: 00:24:30.870 { 00:24:30.870 "code": -17, 00:24:30.870 "message": "File exists" 00:24:30.870 } 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:30.870 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 14:29:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 request: 00:24:31.129 { 00:24:31.129 "name": "nvme_second", 00:24:31.129 
"trtype": "tcp", 00:24:31.129 "traddr": "10.0.0.2", 00:24:31.129 "adrfam": "ipv4", 00:24:31.129 "trsvcid": "8009", 00:24:31.129 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:31.129 "wait_for_attach": true, 00:24:31.129 "method": "bdev_nvme_start_discovery", 00:24:31.129 "req_id": 1 00:24:31.129 } 00:24:31.129 Got JSON-RPC error response 00:24:31.129 response: 00:24:31.129 { 00:24:31.129 "code": -17, 00:24:31.129 "message": "File exists" 00:24:31.129 } 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:31.129 14:29:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:31.129 14:29:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:31.129 14:29:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.130 14:29:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.506 [2024-07-12 14:29:24.081857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.506 [2024-07-12 14:29:24.081883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6a100 with addr=10.0.0.2, port=8010 00:24:32.506 [2024-07-12 14:29:24.081894] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:32.506 [2024-07-12 14:29:24.081900] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:32.506 [2024-07-12 14:29:24.081907] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:33.444 [2024-07-12 14:29:25.084283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.444 [2024-07-12 14:29:25.084306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8ca00 with addr=10.0.0.2, port=8010 00:24:33.444 [2024-07-12 14:29:25.084316] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:33.444 [2024-07-12 14:29:25.084322] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:33.444 [2024-07-12 14:29:25.084329] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:34.381 [2024-07-12 14:29:26.086493] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:34.381 request: 00:24:34.381 { 00:24:34.381 "name": "nvme_second", 00:24:34.381 "trtype": "tcp", 00:24:34.381 "traddr": "10.0.0.2", 00:24:34.381 "adrfam": "ipv4", 00:24:34.381 "trsvcid": "8010", 00:24:34.381 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:34.381 "wait_for_attach": false, 
00:24:34.381 "attach_timeout_ms": 3000, 00:24:34.381 "method": "bdev_nvme_start_discovery", 00:24:34.381 "req_id": 1 00:24:34.381 } 00:24:34.381 Got JSON-RPC error response 00:24:34.381 response: 00:24:34.381 { 00:24:34.381 "code": -110, 00:24:34.381 "message": "Connection timed out" 00:24:34.381 } 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2649636 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.381 rmmod nvme_tcp 00:24:34.381 rmmod nvme_fabrics 00:24:34.381 rmmod nvme_keyring 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2649403 ']' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2649403 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2649403 ']' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2649403 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2649403 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 2649403' 00:24:34.381 killing process with pid 2649403 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2649403 00:24:34.381 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2649403 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.641 14:29:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.547 14:29:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.547 00:24:36.547 real 0m18.005s 00:24:36.547 user 0m22.852s 00:24:36.547 sys 0m5.333s 00:24:36.547 14:29:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:36.547 14:29:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.547 ************************************ 00:24:36.547 END TEST nvmf_host_discovery 00:24:36.547 ************************************ 00:24:36.547 14:29:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:36.547 14:29:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:36.547 14:29:28 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.547 14:29:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.547 14:29:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.806 ************************************ 00:24:36.806 START TEST nvmf_host_multipath_status 00:24:36.806 ************************************ 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:36.806 * Looking for test storage... 00:24:36.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.806 14:29:28 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.806 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.807 14:29:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.154 14:29:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.154 14:29:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:42.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:42.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.154 14:29:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:42.154 Found net devices under 0000:86:00.0: cvl_0_0 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:42.154 Found net devices under 0000:86:00.1: cvl_0_1 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:42.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:42.154 00:24:42.154 --- 10.0.0.2 ping statistics --- 00:24:42.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.154 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:42.154 00:24:42.154 --- 10.0.0.1 ping statistics --- 00:24:42.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.154 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- 
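The ping exchange above verifies the loopback topology that `nvmf_tcp_init` builds on a dual-port NIC: one port (`cvl_0_0`, the target side, 10.0.0.2) is moved into a fresh network namespace while the other (`cvl_0_1`, the initiator side, 10.0.0.1) stays in the root namespace, so TCP traffic between them traverses the physical link. A dry-run sketch of that sequence, with names taken from this log; the commands are printed rather than executed so the sketch runs without root or the physical NICs:

```shell
# Print the namespace topology commands seen in the log above (dry run).
# Interface/namespace names are the ones this test run happened to use.
print_netns_setup() {
  local ns=$1 tgt_if=$2 ini_if=$3
cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
print_netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The final two pings correspond exactly to the two ping blocks in the log: root namespace to target IP, then namespace back to the initiator IP.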
common/autotest_common.sh@722 -- # xtrace_disable 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:42.154 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2654591 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2654591 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2654591 ']' 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.155 14:29:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.155 [2024-07-12 14:29:33.756932] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:24:42.155 [2024-07-12 14:29:33.756975] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.155 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.155 [2024-07-12 14:29:33.810167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:42.155 [2024-07-12 14:29:33.889792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.155 [2024-07-12 14:29:33.889829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.155 [2024-07-12 14:29:33.889836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.155 [2024-07-12 14:29:33.889842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.155 [2024-07-12 14:29:33.889848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:42.155 [2024-07-12 14:29:33.889889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.155 [2024-07-12 14:29:33.889892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2654591 00:24:42.721 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.031 [2024-07-12 14:29:34.761701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.031 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:43.031 Malloc0 00:24:43.031 14:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:43.289 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.547 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.547 [2024-07-12 14:29:35.489462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.547 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:43.804 [2024-07-12 14:29:35.657907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2654974 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2654974 /var/tmp/bdevperf.sock 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2654974 ']' 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
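Collected in one place, the target-side configuration the test just issued is: create the TCP transport, back it with a 64 MiB/512 B malloc bdev, create subsystem `cnode1`, add the namespace, and listen on both 4420 and 4421 (two listeners on one subsystem is what gives bdevperf two paths later). A dry-run sketch with the commands copied from the log; `$rpc` abbreviates the full `scripts/rpc.py` path used in the workspace:

```shell
# Print the target-side RPC sequence from the log above (dry run, no SPDK needed).
print_target_setup() {
  local rpc=$1 nqn=nqn.2016-06.io.spdk:cnode1
cat <<EOF
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
EOF
}
print_target_setup scripts/rpc.py
```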
00:24:43.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.804 14:29:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.738 14:29:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.738 14:29:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:44.738 14:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:44.738 14:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:44.997 Nvme0n1 00:24:44.997 14:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:45.562 Nvme0n1 00:24:45.562 14:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:45.562 14:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:47.463 14:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:47.463 14:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
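On the initiator side, both listeners are attached under the same controller name (`-b Nvme0`); the second `bdev_nvme_attach_controller` carries `-x multipath`, which is what folds the 4421 connection into the existing `Nvme0n1` bdev as an extra path instead of creating a second bdev. A dry-run sketch, commands copied from the log, with the rpc.py path abbreviated:

```shell
# Print the bdevperf-side RPCs from the log above (dry run).
print_multipath_attach() {
  local rpc="$1 -s /var/tmp/bdevperf.sock" nqn=nqn.2016-06.io.spdk:cnode1
cat <<EOF
$rpc bdev_nvme_set_options -r -1
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -l -1 -o 10
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x multipath -l -1 -o 10
EOF
}
print_multipath_attach scripts/rpc.py
```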
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:47.720 14:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:47.978 14:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:48.915 14:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:48.915 14:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.915 14:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.915 14:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.174 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.174 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:49.174 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.174 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.433 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.691 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.691 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.691 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.691 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:49.950 14:29:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:49.950 14:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.209 14:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.468 14:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:51.403 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:51.403 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.403 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.403 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.661 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
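The repeated `port_status` checks above all follow one pattern: dump `bdev_nvme_get_io_paths` and select a single attribute (`current`, `connected`, `accessible`) for the path whose `trsvcid` matches the port. A runnable sketch of that jq filter against a hand-written sample document; the real RPC output carries more fields, only the shape the filter touches is reproduced here:

```shell
# Sample shaped like bdev_nvme_get_io_paths output (fields trimmed to what
# the jq filter in the log uses). Requires jq.
sample='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

port_status() {  # port_status <trsvcid> <attribute>
  jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2" <<<"$sample"
}

port_status 4420 current    # -> true  (4420 is the active path in this sample)
port_status 4421 current    # -> false
```

This mirrors the `check_status true false true true true true` call in the log: both paths connected and accessible, with only 4420 current.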
\f\a\l\s\e ]] 00:24:51.661 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:51.661 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.661 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.919 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.919 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.919 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.920 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.920 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.920 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.920 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.920 14:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.178 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.178 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:24:52.178 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.178 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:52.436 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:52.694 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:52.953 14:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:53.890 14:29:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:53.890 14:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.890 14:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.890 14:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.149 14:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.149 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.149 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.149 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.408 14:29:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.408 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.666 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.666 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.666 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.666 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.925 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.925 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.925 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.925 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.184 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.184 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
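[editor's note] The `port_status` checks traced above all follow one pattern: call `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`, then filter with `jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD'` and compare against the expected boolean. A minimal Python sketch of that selection logic, run against a hypothetical payload shaped the way the jq filters imply (the exact field layout is an assumption inferred from this trace, not taken from SPDK documentation):

```python
import json

# Hypothetical bdev_nvme_get_io_paths response: only the fields the jq
# filters in this trace actually touch (.transport.trsvcid plus the three
# booleans current / connected / accessible).
payload = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": false, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": true,  "connected": true, "accessible": true}
    ]}
  ]
}
""")

def port_status(doc, port, field):
    """Mirror of: jq -r '.poll_groups[].io_paths[]
                         | select(.transport.trsvcid=="PORT").FIELD'"""
    for group in doc["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]

print(port_status(payload, "4421", "current"))  # True
```

The shell test then string-compares that value (`[[ true == \t\r\u\e ]]`) rather than treating it as JSON, which is why `jq -r` (raw output) is used.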
00:24:55.184 14:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:55.184 14:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:55.443 14:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:56.378 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:56.378 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.378 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.379 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.637 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.637 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:56.637 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.637 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.896 14:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.155 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.155 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.155 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.155 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.413 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.413 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:24:57.413 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.414 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.672 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.672 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:57.672 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:57.673 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:57.932 14:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:58.869 14:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:58.869 14:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:58.869 14:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.869 14:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.129 14:29:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.129 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:59.129 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.129 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.390 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.692 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.692 
14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:59.692 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.692 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.952 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.952 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:59.953 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.953 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.953 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.953 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:59.953 14:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:00.211 14:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:00.469 14:29:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:25:01.405 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:01.405 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:01.405 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.405 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.663 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.663 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:01.663 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.664 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.664 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.664 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.664 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.664 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.922 14:29:53 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.922 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.922 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.922 14:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.182 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.182 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:02.182 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.182 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.441 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.441 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:02.441 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.441 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.441 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.441 14:29:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:02.699 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:02.699 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:02.958 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:02.958 14:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:04.336 14:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:04.336 14:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:04.336 14:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.336 14:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.336 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.596 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.596 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.596 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.596 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.855 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.855 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.855 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:04.855 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.114 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.114 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.114 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.114 14:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.114 14:29:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.114 14:29:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:05.114 14:29:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:05.373 14:29:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.632 14:29:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:06.570 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:06.570 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:06.570 14:29:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.570 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.828 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.087 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.087 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.087 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.087 14:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.346 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:07.604 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.604 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:07.604 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:07.862 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:08.120 14:29:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:09.056 14:30:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:09.056 14:30:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:09.056 14:30:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.056 14:30:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:09.316 14:30:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.316 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:09.575 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.575 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.575 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.575 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.834 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.834 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.834 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.834 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.091 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.091 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:10.091 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.091 14:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:10.091 14:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.091 14:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:10.091 14:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:10.349 14:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:10.608 14:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:11.542 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:11.542 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.542 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.542 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.801 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.801 14:30:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:11.801 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.801 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.059 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.059 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.059 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.059 14:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:12.059 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.059 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:12.060 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.060 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.317 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.317 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:12.317 
14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.317 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.574 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.574 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:12.574 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.574 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.834 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.834 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2654974 ']' 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2654974' 00:25:12.835 killing process with pid 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2654974 00:25:12.835 Connection closed with partial response: 00:25:12.835 00:25:12.835 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2654974 00:25:12.835 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:12.835 [2024-07-12 14:29:35.720980] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:25:12.835 [2024-07-12 14:29:35.721035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654974 ] 00:25:12.835 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.835 [2024-07-12 14:29:35.771544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.835 [2024-07-12 14:29:35.845740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.835 Running I/O for 90 seconds... 
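The trace above repeatedly calls `bdev_nvme_get_io_paths` over the bdevperf RPC socket and pipes the result through a `jq` filter to verify each path's `current`/`connected`/`accessible` flags after each `set_ANA_state` transition. A minimal Python sketch of that selection logic, using a hypothetical sample of the RPC's JSON output (field names are copied from the `jq` filters visible in the log; the shape of the sample document is an assumption, and the real test drives `rpc.py -s /var/tmp/bdevperf.sock` with `jq` rather than Python):

```python
import json

# Hypothetical sample mimicking bdev_nvme_get_io_paths output; the exact
# document shape is assumed, only the field names come from the log's jq filters.
sample = json.loads("""
{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
  {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
]}]}
""")

def port_status(data, port, field):
    # Emulates: jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD'
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None

# Mirrors the check_status sequence after set_ANA_state non_optimized inaccessible:
print(port_status(sample, "4420", "current"))      # True
print(port_status(sample, "4421", "accessible"))   # False
```

In the traced `multipath_status.sh`, each `port_status <port> <field> <expected>` call is the bash equivalent of this lookup followed by the `[[ true == \t\r\u\e ]]`-style comparison against the expected value.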
00:25:12.835 [2024-07-12 14:29:49.634221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 
[2024-07-12 14:29:49.634390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.835 [2024-07-12 14:29:49.634448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 
14:29:49.634507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 
14:29:49.634615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 
14:29:49.634725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.835 [2024-07-12 14:29:49.634769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:12.835 [2024-07-12 14:29:49.634782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.634789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.635984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.635990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:12.836 [2024-07-12 14:29:49.636292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.836 [2024-07-12 14:29:49.636299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.636785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.636977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.636996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.837 [2024-07-12 14:29:49.637003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:12.837 [2024-07-12 14:29:49.637198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.837 [2024-07-12 14:29:49.637205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:29:49.637887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:29:49.637894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.436783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.436795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.438383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.438405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.438423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.838 [2024-07-12 14:30:02.438430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:12.838 [2024-07-12 14:30:02.438444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.438803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.839 [2024-07-12 14:30:02.438823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.438836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.839 [2024-07-12 14:30:02.438842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:12.839 [2024-07-12 14:30:02.439808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.839 [2024-07-12 14:30:02.439815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:12.840 [2024-07-12 14:30:02.439984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.840 [2024-07-12 14:30:02.439992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:12.840 Received shutdown signal, test time was about 27.074075 
seconds 00:25:12.840 00:25:12.840 Latency(us) 00:25:12.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.840 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:12.840 Verification LBA range: start 0x0 length 0x4000 00:25:12.840 Nvme0n1 : 27.07 10326.33 40.34 0.00 0.00 12375.61 470.15 3019898.88 00:25:12.840 =================================================================================================================== 00:25:12.840 Total : 10326.33 40.34 0.00 0.00 12375.61 470.15 3019898.88 00:25:12.840 14:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.098 rmmod nvme_tcp 00:25:13.098 rmmod nvme_fabrics 00:25:13.098 rmmod nvme_keyring 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.098 14:30:05 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2654591 ']' 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2654591 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2654591 ']' 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2654591 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.098 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2654591 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2654591' 00:25:13.356 killing process with pid 2654591 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2654591 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2654591 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.356 14:30:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.893 14:30:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:15.893 00:25:15.893 real 0m38.822s 00:25:15.893 user 1m46.055s 00:25:15.893 sys 0m10.137s 00:25:15.893 14:30:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:15.893 14:30:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.893 ************************************ 00:25:15.893 END TEST nvmf_host_multipath_status 00:25:15.893 ************************************ 00:25:15.893 14:30:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:15.893 14:30:07 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:15.893 14:30:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:15.893 14:30:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.893 14:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:15.893 ************************************ 00:25:15.893 START TEST nvmf_discovery_remove_ifc 00:25:15.893 ************************************ 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:15.893 * Looking for test storage... 
00:25:15.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.893 14:30:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.893 14:30:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.893 14:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:21.226 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:21.226 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:21.226 Found net devices under 0000:86:00.0: cvl_0_0 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:21.226 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.226 
14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:25:21.226 00:25:21.226 --- 10.0.0.2 ping statistics --- 00:25:21.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.226 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:21.226 00:25:21.226 --- 10.0.0.1 ping statistics --- 00:25:21.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.226 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2663792 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2663792 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2663792 ']' 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.226 14:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.226 [2024-07-12 14:30:12.931336] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:25:21.226 [2024-07-12 14:30:12.931381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.226 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.226 [2024-07-12 14:30:12.987863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.226 [2024-07-12 14:30:13.066460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.226 [2024-07-12 14:30:13.066495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:21.226 [2024-07-12 14:30:13.066502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.226 [2024-07-12 14:30:13.066508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.226 [2024-07-12 14:30:13.066513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.227 [2024-07-12 14:30:13.066529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.793 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.793 [2024-07-12 14:30:13.780448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.793 [2024-07-12 14:30:13.788567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:21.793 null0 00:25:22.052 [2024-07-12 14:30:13.820578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2664031 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2664031 /tmp/host.sock 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2664031 ']' 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:22.052 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.052 14:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.052 [2024-07-12 14:30:13.886729] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:25:22.052 [2024-07-12 14:30:13.886769] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664031 ] 00:25:22.052 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.052 [2024-07-12 14:30:13.941641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.052 [2024-07-12 14:30:14.021285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.988 14:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.923 [2024-07-12 14:30:15.778304] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.923 [2024-07-12 14:30:15.778323] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.923 [2024-07-12 14:30:15.778334] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.923 [2024-07-12 14:30:15.867617] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:24.182 [2024-07-12 14:30:15.971498] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:24.182 [2024-07-12 14:30:15.971542] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:24.182 [2024-07-12 14:30:15.971561] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:24.182 [2024-07-12 14:30:15.971574] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:24.182 [2024-07-12 14:30:15.971592] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:24.182 14:30:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.182 [2024-07-12 14:30:15.977254] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xab5e30 was disconnected and freed. delete nvme_qpair. 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.182 14:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.182 14:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.559 14:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.496 14:30:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:27.433 14:30:19 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.370 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.370 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.370 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.370 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.370 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.371 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.371 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.371 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.629 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.629 14:30:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.567 [2024-07-12 14:30:21.422980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:29.567 [2024-07-12 14:30:21.423017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.567 [2024-07-12 14:30:21.423028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.567 [2024-07-12 14:30:21.423037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.567 [2024-07-12 14:30:21.423044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.567 [2024-07-12 14:30:21.423051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.567 [2024-07-12 14:30:21.423059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.567 [2024-07-12 14:30:21.423066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.567 [2024-07-12 14:30:21.423073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.567 [2024-07-12 14:30:21.423080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.567 [2024-07-12 14:30:21.423086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:29.567 [2024-07-12 14:30:21.423093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c690 is same with the state(5) to be set 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.567 14:30:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.567 [2024-07-12 14:30:21.433001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c690 (9): Bad file descriptor 00:25:29.567 [2024-07-12 14:30:21.443038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.505 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.505 [2024-07-12 14:30:22.506399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:30.505 [2024-07-12 14:30:22.506453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7c690 with addr=10.0.0.2, port=4420 00:25:30.505 [2024-07-12 14:30:22.506475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c690 is same with the state(5) to be set 00:25:30.505 [2024-07-12 14:30:22.506503] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7c690 (9): Bad file descriptor 00:25:30.505 [2024-07-12 14:30:22.506916] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:30.505 [2024-07-12 14:30:22.506936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:30.505 [2024-07-12 14:30:22.506945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:30.505 [2024-07-12 14:30:22.506954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:30.505 [2024-07-12 14:30:22.506973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.505 [2024-07-12 14:30:22.506983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:30.764 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.764 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.764 14:30:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:31.700 [2024-07-12 14:30:23.509465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:31.700 [2024-07-12 14:30:23.509486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.700 [2024-07-12 14:30:23.509493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.700 [2024-07-12 14:30:23.509500] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:31.700 [2024-07-12 14:30:23.509527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.701 [2024-07-12 14:30:23.509545] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:31.701 [2024-07-12 14:30:23.509566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.701 [2024-07-12 14:30:23.509574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.701 [2024-07-12 14:30:23.509584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.701 [2024-07-12 14:30:23.509591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.701 [2024-07-12 14:30:23.509598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.701 [2024-07-12 14:30:23.509604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.701 [2024-07-12 14:30:23.509612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.701 
[2024-07-12 14:30:23.509618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.701 [2024-07-12 14:30:23.509625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.701 [2024-07-12 14:30:23.509631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.701 [2024-07-12 14:30:23.509648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:31.701 [2024-07-12 14:30:23.509707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7ba80 (9): Bad file descriptor 00:25:31.701 [2024-07-12 14:30:23.510717] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:31.701 [2024-07-12 14:30:23.510727] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:31.701 14:30:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.080 14:30:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.648 [2024-07-12 14:30:25.564894] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.648 [2024-07-12 14:30:25.564910] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.648 [2024-07-12 14:30:25.564922] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.908 [2024-07-12 14:30:25.693324] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.908 
14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.908 14:30:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.908 [2024-07-12 14:30:25.876928] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:33.908 [2024-07-12 14:30:25.876962] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:33.908 [2024-07-12 14:30:25.876980] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:33.908 [2024-07-12 14:30:25.876993] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:33.908 [2024-07-12 14:30:25.876999] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.908 [2024-07-12 14:30:25.882676] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa928d0 was disconnected and freed. delete nvme_qpair. 
00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.846 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2664031 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2664031 ']' 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2664031 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2664031 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.105 14:30:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2664031' 00:25:35.105 killing process with pid 2664031 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2664031 00:25:35.105 14:30:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2664031 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.105 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.105 rmmod nvme_tcp 00:25:35.105 rmmod nvme_fabrics 00:25:35.365 rmmod nvme_keyring 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2663792 ']' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2663792 ']' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663792' 00:25:35.365 killing process with pid 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2663792 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.365 14:30:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.903 14:30:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:25:37.903 00:25:37.903 real 0m21.934s 00:25:37.903 user 0m28.661s 00:25:37.903 sys 0m5.199s 00:25:37.903 14:30:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.903 14:30:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.903 ************************************ 00:25:37.903 END TEST nvmf_discovery_remove_ifc 00:25:37.903 ************************************ 00:25:37.903 14:30:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:37.903 14:30:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.903 14:30:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.903 14:30:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.903 14:30:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.903 ************************************ 00:25:37.903 START TEST nvmf_identify_kernel_target 00:25:37.903 ************************************ 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.903 * Looking for test storage... 
00:25:37.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:30:29 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.903 14:30:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.903 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.904 14:30:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:43.209 14:30:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:43.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.209 14:30:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:43.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:43.209 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:43.210 Found net devices under 0000:86:00.0: cvl_0_0 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:43.210 Found net devices under 0000:86:00.1: cvl_0_1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:43.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:25:43.210 00:25:43.210 --- 10.0.0.2 ping statistics --- 00:25:43.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.210 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:25:43.210 00:25:43.210 --- 10.0.0.1 ping statistics --- 00:25:43.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.210 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.210 14:30:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:43.210 14:30:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:45.745 Waiting for block devices as requested 00:25:45.745 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:45.745 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:45.745 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:45.745 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:45.745 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:45.745 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:45.745 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:46.005 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:46.005 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:46.005 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:46.005 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:46.264 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:46.264 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:46.264 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:46.264 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:46.524 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:46.524 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:46.524 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:46.784 No valid GPT data, bailing 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:46.784 00:25:46.784 Discovery Log Number of Records 2, Generation counter 2 00:25:46.784 =====Discovery Log Entry 0====== 00:25:46.784 trtype: tcp 00:25:46.784 adrfam: ipv4 00:25:46.784 subtype: current discovery subsystem 00:25:46.784 treq: not specified, sq flow control disable supported 00:25:46.784 portid: 1 00:25:46.784 trsvcid: 4420 00:25:46.784 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:46.784 traddr: 10.0.0.1 00:25:46.784 eflags: none 00:25:46.784 sectype: none 00:25:46.784 =====Discovery Log Entry 1====== 00:25:46.784 trtype: tcp 00:25:46.784 adrfam: ipv4 00:25:46.784 subtype: nvme subsystem 00:25:46.784 treq: not specified, sq flow control disable supported 00:25:46.784 portid: 1 00:25:46.784 trsvcid: 4420 00:25:46.784 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:46.784 traddr: 10.0.0.1 00:25:46.784 eflags: none 00:25:46.784 sectype: none 00:25:46.784 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:46.784 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:25:46.784 EAL: No free 2048 kB hugepages reported on node 1
00:25:46.784 =====================================================
00:25:46.784 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:46.784 =====================================================
00:25:46.784 Controller Capabilities/Features
00:25:46.784 ================================
00:25:46.784 Vendor ID: 0000
00:25:46.784 Subsystem Vendor ID: 0000
00:25:46.784 Serial Number: 1f825eafb704871a23f7
00:25:46.784 Model Number: Linux
00:25:46.784 Firmware Version: 6.7.0-68
00:25:46.784 Recommended Arb Burst: 0
00:25:46.784 IEEE OUI Identifier: 00 00 00
00:25:46.784 Multi-path I/O
00:25:46.784 May have multiple subsystem ports: No
00:25:46.784 May have multiple controllers: No
00:25:46.784 Associated with SR-IOV VF: No
00:25:46.784 Max Data Transfer Size: Unlimited
00:25:46.784 Max Number of Namespaces: 0
00:25:46.784 Max Number of I/O Queues: 1024
00:25:46.784 NVMe Specification Version (VS): 1.3
00:25:46.784 NVMe Specification Version (Identify): 1.3
00:25:46.784 Maximum Queue Entries: 1024
00:25:46.784 Contiguous Queues Required: No
00:25:46.784 Arbitration Mechanisms Supported
00:25:46.784 Weighted Round Robin: Not Supported
00:25:46.784 Vendor Specific: Not Supported
00:25:46.784 Reset Timeout: 7500 ms
00:25:46.784 Doorbell Stride: 4 bytes
00:25:46.784 NVM Subsystem Reset: Not Supported
00:25:46.784 Command Sets Supported
00:25:46.784 NVM Command Set: Supported
00:25:46.784 Boot Partition: Not Supported
00:25:46.784 Memory Page Size Minimum: 4096 bytes
00:25:46.784 Memory Page Size Maximum: 4096 bytes
00:25:46.784 Persistent Memory Region: Not Supported
00:25:46.784 Optional Asynchronous Events Supported
00:25:46.784 Namespace Attribute Notices: Not Supported
00:25:46.784 Firmware Activation Notices: Not Supported
00:25:46.784 ANA Change Notices: Not Supported
00:25:46.784 PLE Aggregate Log Change Notices: Not Supported
00:25:46.784 LBA Status Info Alert Notices: Not Supported
00:25:46.784 EGE Aggregate Log Change Notices: Not Supported
00:25:46.784 Normal NVM Subsystem Shutdown event: Not Supported
00:25:46.784 Zone Descriptor Change Notices: Not Supported
00:25:46.784 Discovery Log Change Notices: Supported
00:25:46.784 Controller Attributes
00:25:46.784 128-bit Host Identifier: Not Supported
00:25:46.784 Non-Operational Permissive Mode: Not Supported
00:25:46.784 NVM Sets: Not Supported
00:25:46.784 Read Recovery Levels: Not Supported
00:25:46.784 Endurance Groups: Not Supported
00:25:46.784 Predictable Latency Mode: Not Supported
00:25:46.784 Traffic Based Keep ALive: Not Supported
00:25:46.784 Namespace Granularity: Not Supported
00:25:46.784 SQ Associations: Not Supported
00:25:46.784 UUID List: Not Supported
00:25:46.785 Multi-Domain Subsystem: Not Supported
00:25:46.785 Fixed Capacity Management: Not Supported
00:25:46.785 Variable Capacity Management: Not Supported
00:25:46.785 Delete Endurance Group: Not Supported
00:25:46.785 Delete NVM Set: Not Supported
00:25:46.785 Extended LBA Formats Supported: Not Supported
00:25:46.785 Flexible Data Placement Supported: Not Supported
00:25:46.785 
00:25:46.785 Controller Memory Buffer Support
00:25:46.785 ================================
00:25:46.785 Supported: No
00:25:46.785 
00:25:46.785 Persistent Memory Region Support
00:25:46.785 ================================
00:25:46.785 Supported: No
00:25:46.785 
00:25:46.785 Admin Command Set Attributes
00:25:46.785 ============================
00:25:46.785 Security Send/Receive: Not Supported
00:25:46.785 Format NVM: Not Supported
00:25:46.785 Firmware Activate/Download: Not Supported
00:25:46.785 Namespace Management: Not Supported
00:25:46.785 Device Self-Test: Not Supported
00:25:46.785 Directives: Not Supported
00:25:46.785 NVMe-MI: Not Supported
00:25:46.785 Virtualization Management: Not Supported
00:25:46.785 Doorbell Buffer Config: Not Supported
00:25:46.785 Get LBA Status Capability: Not Supported
00:25:46.785 Command & Feature Lockdown Capability: Not Supported
00:25:46.785 Abort Command Limit: 1
00:25:46.785 Async Event Request Limit: 1
00:25:46.785 Number of Firmware Slots: N/A
00:25:46.785 Firmware Slot 1 Read-Only: N/A
00:25:46.785 Firmware Activation Without Reset: N/A
00:25:46.785 Multiple Update Detection Support: N/A
00:25:46.785 Firmware Update Granularity: No Information Provided
00:25:46.785 Per-Namespace SMART Log: No
00:25:46.785 Asymmetric Namespace Access Log Page: Not Supported
00:25:46.785 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:46.785 Command Effects Log Page: Not Supported
00:25:46.785 Get Log Page Extended Data: Supported
00:25:46.785 Telemetry Log Pages: Not Supported
00:25:46.785 Persistent Event Log Pages: Not Supported
00:25:46.785 Supported Log Pages Log Page: May Support
00:25:46.785 Commands Supported & Effects Log Page: Not Supported
00:25:46.785 Feature Identifiers & Effects Log Page:May Support
00:25:46.785 NVMe-MI Commands & Effects Log Page: May Support
00:25:46.785 Data Area 4 for Telemetry Log: Not Supported
00:25:46.785 Error Log Page Entries Supported: 1
00:25:46.785 Keep Alive: Not Supported
00:25:46.785 
00:25:46.785 NVM Command Set Attributes
00:25:46.785 ==========================
00:25:46.785 Submission Queue Entry Size
00:25:46.785 Max: 1
00:25:46.785 Min: 1
00:25:46.785 Completion Queue Entry Size
00:25:46.785 Max: 1
00:25:46.785 Min: 1
00:25:46.785 Number of Namespaces: 0
00:25:46.785 Compare Command: Not Supported
00:25:46.785 Write Uncorrectable Command: Not Supported
00:25:46.785 Dataset Management Command: Not Supported
00:25:46.785 Write Zeroes Command: Not Supported
00:25:46.785 Set Features Save Field: Not Supported
00:25:46.785 Reservations: Not Supported
00:25:46.785 Timestamp: Not Supported
00:25:46.785 Copy: Not Supported
00:25:46.785 Volatile Write Cache: Not Present
00:25:46.785 Atomic Write Unit (Normal): 1
00:25:46.785 Atomic Write Unit (PFail): 1
00:25:46.785 Atomic Compare & Write Unit: 1
00:25:46.785 Fused Compare & Write: Not Supported
00:25:46.785 Scatter-Gather List
00:25:46.785 SGL Command Set: Supported
00:25:46.785 SGL Keyed: Not Supported
00:25:46.785 SGL Bit Bucket Descriptor: Not Supported
00:25:46.785 SGL Metadata Pointer: Not Supported
00:25:46.785 Oversized SGL: Not Supported
00:25:46.785 SGL Metadata Address: Not Supported
00:25:46.785 SGL Offset: Supported
00:25:46.785 Transport SGL Data Block: Not Supported
00:25:46.785 Replay Protected Memory Block: Not Supported
00:25:46.785 
00:25:46.785 Firmware Slot Information
00:25:46.785 =========================
00:25:46.785 Active slot: 0
00:25:46.785 
00:25:46.785 
00:25:46.785 Error Log
00:25:46.785 =========
00:25:46.785 
00:25:46.785 Active Namespaces
00:25:46.785 =================
00:25:46.785 Discovery Log Page
00:25:46.785 ==================
00:25:46.785 Generation Counter: 2
00:25:46.785 Number of Records: 2
00:25:46.785 Record Format: 0
00:25:46.785 
00:25:46.785 Discovery Log Entry 0
00:25:46.785 ----------------------
00:25:46.785 Transport Type: 3 (TCP)
00:25:46.785 Address Family: 1 (IPv4)
00:25:46.785 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:46.785 Entry Flags:
00:25:46.785 Duplicate Returned Information: 0
00:25:46.785 Explicit Persistent Connection Support for Discovery: 0
00:25:46.785 Transport Requirements:
00:25:46.785 Secure Channel: Not Specified
00:25:46.785 Port ID: 1 (0x0001)
00:25:46.785 Controller ID: 65535 (0xffff)
00:25:46.785 Admin Max SQ Size: 32
00:25:46.785 Transport Service Identifier: 4420
00:25:46.785 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:46.785 Transport Address: 10.0.0.1
00:25:46.785 Discovery Log Entry 1
00:25:46.785 ----------------------
00:25:46.785 Transport Type: 3 (TCP)
00:25:46.785 Address Family: 1 (IPv4)
00:25:46.785 Subsystem Type: 2 (NVM Subsystem)
00:25:46.785 Entry Flags:
00:25:46.785 Duplicate Returned Information: 0
00:25:46.785 Explicit Persistent Connection Support for Discovery: 0
00:25:46.785 Transport Requirements:
00:25:46.785 Secure Channel: Not Specified
00:25:46.785 Port ID: 1 (0x0001)
00:25:46.785 Controller ID: 65535 (0xffff)
00:25:46.785 Admin Max SQ Size: 32
00:25:46.785 Transport Service Identifier: 4420
00:25:46.785 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:25:46.785 Transport Address: 10.0.0.1
00:25:46.785 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:25:46.785 EAL: No free 2048 kB hugepages reported on node 1
00:25:47.044 get_feature(0x01) failed
00:25:47.044 get_feature(0x02) failed
00:25:47.044 get_feature(0x04) failed
00:25:47.044 =====================================================
00:25:47.044 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:25:47.044 =====================================================
00:25:47.044 Controller Capabilities/Features
00:25:47.044 ================================
00:25:47.044 Vendor ID: 0000
00:25:47.044 Subsystem Vendor ID: 0000
00:25:47.044 Serial Number: 5f692a834f7c60a2193c
00:25:47.044 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:25:47.044 Firmware Version: 6.7.0-68
00:25:47.044 Recommended Arb Burst: 6
00:25:47.044 IEEE OUI Identifier: 00 00 00
00:25:47.044 Multi-path I/O
00:25:47.044 May have multiple subsystem ports: Yes
00:25:47.044 May have multiple controllers: Yes
00:25:47.044 Associated with SR-IOV VF: No
00:25:47.044 Max Data Transfer Size: Unlimited
00:25:47.044 Max Number of Namespaces: 1024
00:25:47.044 Max Number of I/O Queues: 128
00:25:47.044 NVMe Specification Version (VS): 1.3
00:25:47.044 NVMe Specification Version (Identify): 1.3
00:25:47.044 Maximum Queue Entries: 1024
00:25:47.044 Contiguous Queues Required: No
00:25:47.044 Arbitration Mechanisms Supported
00:25:47.044 Weighted Round Robin: Not Supported
00:25:47.045 Vendor Specific: Not Supported
00:25:47.045 Reset Timeout: 7500 ms
00:25:47.045 Doorbell Stride: 4 bytes
00:25:47.045 NVM Subsystem Reset: Not Supported
00:25:47.045 Command Sets Supported
00:25:47.045 NVM Command Set: Supported
00:25:47.045 Boot Partition: Not Supported
00:25:47.045 Memory Page Size Minimum: 4096 bytes
00:25:47.045 Memory Page Size Maximum: 4096 bytes
00:25:47.045 Persistent Memory Region: Not Supported
00:25:47.045 Optional Asynchronous Events Supported
00:25:47.045 Namespace Attribute Notices: Supported
00:25:47.045 Firmware Activation Notices: Not Supported
00:25:47.045 ANA Change Notices: Supported
00:25:47.045 PLE Aggregate Log Change Notices: Not Supported
00:25:47.045 LBA Status Info Alert Notices: Not Supported
00:25:47.045 EGE Aggregate Log Change Notices: Not Supported
00:25:47.045 Normal NVM Subsystem Shutdown event: Not Supported
00:25:47.045 Zone Descriptor Change Notices: Not Supported
00:25:47.045 Discovery Log Change Notices: Not Supported
00:25:47.045 Controller Attributes
00:25:47.045 128-bit Host Identifier: Supported
00:25:47.045 Non-Operational Permissive Mode: Not Supported
00:25:47.045 NVM Sets: Not Supported
00:25:47.045 Read Recovery Levels: Not Supported
00:25:47.045 Endurance Groups: Not Supported
00:25:47.045 Predictable Latency Mode: Not Supported
00:25:47.045 Traffic Based Keep ALive: Supported
00:25:47.045 Namespace Granularity: Not Supported
00:25:47.045 SQ Associations: Not Supported
00:25:47.045 UUID List: Not Supported
00:25:47.045 Multi-Domain Subsystem: Not Supported
00:25:47.045 Fixed Capacity Management: Not Supported
00:25:47.045 Variable Capacity Management: Not Supported
00:25:47.045 Delete Endurance Group: Not Supported
00:25:47.045 Delete NVM Set: Not Supported
00:25:47.045 Extended LBA Formats Supported: Not Supported
00:25:47.045 Flexible Data Placement Supported: Not Supported
00:25:47.045 
00:25:47.045 Controller Memory Buffer Support
00:25:47.045 ================================
00:25:47.045 Supported: No
00:25:47.045 
00:25:47.045 Persistent Memory Region Support
00:25:47.045 ================================
00:25:47.045 Supported: No
00:25:47.045 
00:25:47.045 Admin Command Set Attributes
00:25:47.045 ============================
00:25:47.045 Security Send/Receive: Not Supported
00:25:47.045 Format NVM: Not Supported
00:25:47.045 Firmware Activate/Download: Not Supported
00:25:47.045 Namespace Management: Not Supported
00:25:47.045 Device Self-Test: Not Supported
00:25:47.045 Directives: Not Supported
00:25:47.045 NVMe-MI: Not Supported
00:25:47.045 Virtualization Management: Not Supported
00:25:47.045 Doorbell Buffer Config: Not Supported
00:25:47.045 Get LBA Status Capability: Not Supported
00:25:47.045 Command & Feature Lockdown Capability: Not Supported
00:25:47.045 Abort Command Limit: 4
00:25:47.045 Async Event Request Limit: 4
00:25:47.045 Number of Firmware Slots: N/A
00:25:47.045 Firmware Slot 1 Read-Only: N/A
00:25:47.045 Firmware Activation Without Reset: N/A
00:25:47.045 Multiple Update Detection Support: N/A
00:25:47.045 Firmware Update Granularity: No Information Provided
00:25:47.045 Per-Namespace SMART Log: Yes
00:25:47.045 Asymmetric Namespace Access Log Page: Supported
00:25:47.045 ANA Transition Time : 10 sec
00:25:47.045 
00:25:47.045 Asymmetric Namespace Access Capabilities
00:25:47.045 ANA Optimized State : Supported
00:25:47.045 ANA Non-Optimized State : Supported
00:25:47.045 ANA Inaccessible State : Supported
00:25:47.045 ANA Persistent Loss State : Supported
00:25:47.045 ANA Change State : Supported
00:25:47.045 ANAGRPID is not changed : No
00:25:47.045 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:25:47.045 
00:25:47.045 ANA Group Identifier Maximum : 128
00:25:47.045 Number of ANA Group Identifiers : 128
00:25:47.045 Max Number of Allowed Namespaces : 1024
00:25:47.045 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:25:47.045 Command Effects Log Page: Supported
00:25:47.045 Get Log Page Extended Data: Supported
00:25:47.045 Telemetry Log Pages: Not Supported
00:25:47.045 Persistent Event Log Pages: Not Supported
00:25:47.045 Supported Log Pages Log Page: May Support
00:25:47.045 Commands Supported & Effects Log Page: Not Supported
00:25:47.045 Feature Identifiers & Effects Log Page:May Support
00:25:47.045 NVMe-MI Commands & Effects Log Page: May Support
00:25:47.045 Data Area 4 for Telemetry Log: Not Supported
00:25:47.045 Error Log Page Entries Supported: 128
00:25:47.045 Keep Alive: Supported
00:25:47.045 Keep Alive Granularity: 1000 ms
00:25:47.045 
00:25:47.045 NVM Command Set Attributes
00:25:47.045 ==========================
00:25:47.045 Submission Queue Entry Size
00:25:47.045 Max: 64
00:25:47.045 Min: 64
00:25:47.045 Completion Queue Entry Size
00:25:47.045 Max: 16
00:25:47.045 Min: 16
00:25:47.045 Number of Namespaces: 1024
00:25:47.045 Compare Command: Not Supported
00:25:47.045 Write Uncorrectable Command: Not Supported
00:25:47.045 Dataset Management Command: Supported
00:25:47.045 Write Zeroes Command: Supported
00:25:47.045 Set Features Save Field: Not Supported
00:25:47.045 Reservations: Not Supported
00:25:47.045 Timestamp: Not Supported
00:25:47.045 Copy: Not Supported
00:25:47.045 Volatile Write Cache: Present
00:25:47.045 Atomic Write Unit (Normal): 1
00:25:47.045 Atomic Write Unit (PFail): 1
00:25:47.045 Atomic Compare & Write Unit: 1
00:25:47.045 Fused Compare & Write: Not Supported
00:25:47.045 Scatter-Gather List
00:25:47.045 SGL Command Set: Supported
00:25:47.045 SGL Keyed: Not Supported
00:25:47.045 SGL Bit Bucket Descriptor: Not Supported
00:25:47.045 SGL Metadata Pointer: Not Supported
00:25:47.045 Oversized SGL: Not Supported
00:25:47.045 SGL Metadata Address: Not Supported
00:25:47.045 SGL Offset: Supported
00:25:47.045 Transport SGL Data Block: Not Supported
00:25:47.045 Replay Protected Memory Block: Not Supported
00:25:47.045 
00:25:47.045 Firmware Slot Information
00:25:47.045 =========================
00:25:47.045 Active slot: 0
00:25:47.045 
00:25:47.045 Asymmetric Namespace Access
00:25:47.045 ===========================
00:25:47.045 Change Count : 0
00:25:47.045 Number of ANA Group Descriptors : 1
00:25:47.045 ANA Group Descriptor : 0
00:25:47.045 ANA Group ID : 1
00:25:47.045 Number of NSID Values : 1
00:25:47.045 Change Count : 0
00:25:47.045 ANA State : 1
00:25:47.045 Namespace Identifier : 1
00:25:47.045 
00:25:47.045 Commands Supported and Effects
00:25:47.045 ==============================
00:25:47.045 Admin Commands
00:25:47.045 --------------
00:25:47.045 Get Log Page (02h): Supported
00:25:47.045 Identify (06h): Supported
00:25:47.045 Abort (08h): Supported
00:25:47.045 Set Features (09h): Supported
00:25:47.045 Get Features (0Ah): Supported
00:25:47.045 Asynchronous Event Request (0Ch): Supported
00:25:47.045 Keep Alive (18h): Supported
00:25:47.045 I/O Commands
00:25:47.045 ------------
00:25:47.045 Flush (00h): Supported
00:25:47.045 Write (01h): Supported LBA-Change
00:25:47.045 Read (02h): Supported
00:25:47.045 Write Zeroes (08h): Supported LBA-Change
00:25:47.045 Dataset Management (09h): Supported
00:25:47.045 
00:25:47.045 Error Log
00:25:47.045 =========
00:25:47.045 Entry: 0
00:25:47.045 Error Count: 0x3
00:25:47.045 Submission Queue Id: 0x0
00:25:47.045 Command Id: 0x5
00:25:47.045 Phase Bit: 0
00:25:47.045 Status Code: 0x2
00:25:47.045 Status Code Type: 0x0
00:25:47.045 Do Not Retry: 1
00:25:47.045 Error Location: 0x28
00:25:47.045 LBA: 0x0
00:25:47.045 Namespace: 0x0
00:25:47.045 Vendor Log Page: 0x0
00:25:47.045 -----------
00:25:47.045 Entry: 1
00:25:47.045 Error Count: 0x2
00:25:47.045 Submission Queue Id: 0x0
00:25:47.045 Command Id: 0x5
00:25:47.045 Phase Bit: 0
00:25:47.045 Status Code: 0x2
00:25:47.045 Status Code Type: 0x0
00:25:47.045 Do Not Retry: 1
00:25:47.045 Error Location: 0x28
00:25:47.045 LBA: 0x0
00:25:47.045 Namespace: 0x0
00:25:47.045 Vendor Log Page: 0x0
00:25:47.045 -----------
00:25:47.045 Entry: 2
00:25:47.045 Error Count: 0x1
00:25:47.045 Submission Queue Id: 0x0
00:25:47.045 Command Id: 0x4
00:25:47.045 Phase Bit: 0
00:25:47.045 Status Code: 0x2
00:25:47.045 Status Code Type: 0x0
00:25:47.045 Do Not Retry: 1
00:25:47.045 Error Location: 0x28
00:25:47.045 LBA: 0x0
00:25:47.045 Namespace: 0x0
00:25:47.045 Vendor Log Page: 0x0
00:25:47.045 
00:25:47.045 Number of Queues
00:25:47.045 ================
00:25:47.045 Number of I/O Submission Queues: 128
00:25:47.046 Number of I/O Completion Queues: 128
00:25:47.046 
00:25:47.046 ZNS Specific Controller Data
00:25:47.046 ============================
00:25:47.046 Zone Append Size Limit: 0
00:25:47.046 
00:25:47.046 
00:25:47.046 Active Namespaces
00:25:47.046 =================
00:25:47.046 get_feature(0x05) failed
00:25:47.046 Namespace ID:1
00:25:47.046 Command Set Identifier: NVM (00h)
00:25:47.046 Deallocate: Supported
00:25:47.046 Deallocated/Unwritten Error: Not Supported
00:25:47.046 Deallocated Read Value: Unknown
00:25:47.046 Deallocate in Write Zeroes: Not Supported
00:25:47.046 Deallocated Guard Field: 0xFFFF
00:25:47.046 Flush: Supported
00:25:47.046 Reservation: Not Supported
00:25:47.046 Namespace Sharing Capabilities: Multiple Controllers
00:25:47.046 Size (in LBAs): 1953525168 (931GiB)
00:25:47.046 Capacity (in LBAs): 1953525168 (931GiB)
00:25:47.046 Utilization (in LBAs): 1953525168 (931GiB)
00:25:47.046 UUID: 80f0a7b0-23be-44e5-a6e6-ddda7d96021b
00:25:47.046 Thin Provisioning: Not Supported
00:25:47.046 Per-NS Atomic Units: Yes
00:25:47.046 Atomic Boundary Size (Normal): 0
00:25:47.046 Atomic Boundary Size (PFail): 0
00:25:47.046 Atomic Boundary Offset: 0
00:25:47.046 NGUID/EUI64 Never Reused: No
00:25:47.046 ANA group ID: 1
00:25:47.046 Namespace Write Protected: No
00:25:47.046 Number of LBA Formats: 1
00:25:47.046 Current LBA Format: LBA Format #00
00:25:47.046 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:47.046 
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:47.046 rmmod nvme_tcp
00:25:47.046 rmmod nvme_fabrics
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:47.046 14:30:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:48.953 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:49.212 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:49.212 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:25:49.212 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:25:49.212 14:30:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:51.748 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:51.748 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:52.317 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:25:52.317 
00:25:52.317 real 0m14.728s
00:25:52.317 user 0m3.528s
00:25:52.317 sys 0m7.462s
00:25:52.317 14:30:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:52.317 14:30:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:25:52.317 ************************************
00:25:52.317 END TEST nvmf_identify_kernel_target
00:25:52.317 ************************************
00:25:52.317 14:30:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:52.317 14:30:44 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:52.317 14:30:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:52.317 14:30:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:52.317 14:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:52.317 ************************************
00:25:52.317 START TEST nvmf_auth_host
00:25:52.317 ************************************
00:25:52.317 14:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:52.576 * Looking for test storage...
00:25:52.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:25:52.576 14:30:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:25:57.847 Found 0000:86:00.0 (0x8086 - 0x159b)
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:57.847 14:30:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:57.847 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:25:57.847 Found net devices under 0000:86:00.0: cvl_0_0 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:57.847 Found net devices under 0000:86:00.1: cvl_0_1 00:25:57.847 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- 
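The `gather_supported_nvmf_pci_devs` phase above builds per-family device-ID arrays (`e810`, `x722`, `mlx`) and matches each discovered PCI NIC against them before collecting its net devices from sysfs. A minimal sketch of that classification step, using the vendor/device IDs visible in the trace (`classify_nic` is a hypothetical helper, not a function from `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the ID-matching done by gather_supported_nvmf_pci_devs:
# map a PCI vendor:device pair to the NIC family the test framework groups it under.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver, as in the log)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family IDs
    *)                           echo unknown ;;
  esac
}

# The trace's "Found 0000:86:00.0 (0x8086 - 0x159b)" lines correspond to:
classify_nic 0x8086 0x159b
```

On this runner both ports (`0000:86:00.0` and `0000:86:00.1`) report `0x8086 - 0x159b`, which is why the trace takes the `e810 == e810` branch and ends up with the two `cvl_0_*` net devices.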
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:57.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:25:57.848 00:25:57.848 --- 10.0.0.2 ping statistics --- 00:25:57.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.848 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:25:57.848 00:25:57.848 --- 10.0.0.1 ping statistics --- 00:25:57.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.848 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.848 14:30:49 
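The `nvmf_tcp_init` sequence above moves one port into a network namespace so the target (10.0.0.2 on `cvl_0_0`) and initiator (10.0.0.1 on `cvl_0_1`) can talk over real hardware on a single host, then verifies both directions with `ping`. A dry-run sketch of that wiring, assuming the interface and namespace names from the log (`RUN=echo` prints each step instead of executing it, since the real commands require root and the `cvl_0_*` ports):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring from nvmf_tcp_init above.
RUN=echo                     # set RUN= to actually execute (needs root)
NS=cvl_0_0_ns_spdk           # target namespace, as in the log
TGT_IF=cvl_0_0
INI_IF=cvl_0_1

$RUN ip netns add "$NS"                                          # isolate the target side
$RUN ip link set "$TGT_IF" netns "$NS"                           # move the target port into it
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN ping -c 1 10.0.0.2                                          # initiator -> target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1                      # target -> initiator
```

Because the target lives in `cvl_0_0_ns_spdk`, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix prepended to `NVMF_APP`).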
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2675773 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2675773 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2675773 ']' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.848 14:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:58.414 14:30:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b1b410d3b88611543482a4fc0070ae6 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kE3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b1b410d3b88611543482a4fc0070ae6 0 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b1b410d3b88611543482a4fc0070ae6 0 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b1b410d3b88611543482a4fc0070ae6 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kE3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kE3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kE3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
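Each `gen_dhchap_key <digest> <len>` cycle above reads random bytes with `xxd` and pipes them through an inline `python -` (`format_key DHHC-1 ...`) before writing the result to a `mktemp` file. A sketch of what that inline formatting plausibly does, assuming the DHHC-1 secret representation (base64 of the secret bytes followed by their little-endian CRC32, with the digest number as a two-digit hex field) rather than SPDK's exact script:

```shell
#!/usr/bin/env bash
# Sketch of gen_dhchap_key/format_key: 16 random bytes -> 32 hex chars -> DHHC-1 string.
# The exact inline python in nvmf/common.sh is not shown in the log; this is an assumption.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as for "gen_dhchap_key null 32"
line=$(python3 - "$key" 0 <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])              # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02x}:{blob}:")
EOF
)
echo "$line"
```

The digest index matters: in the log, `gen_dhchap_key sha512 64` passes `3` to `format_dhchap_key`, so the resulting `/tmp/spdk.key-sha512.*` files carry `DHHC-1:03:` headers even though the random material is generated the same way.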
64 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5c5aab058603c178c442e8b61b7e55632928706095bd6f49fd580187abb7b652 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HHV 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5c5aab058603c178c442e8b61b7e55632928706095bd6f49fd580187abb7b652 3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5c5aab058603c178c442e8b61b7e55632928706095bd6f49fd580187abb7b652 3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5c5aab058603c178c442e8b61b7e55632928706095bd6f49fd580187abb7b652 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:58.414 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HHV 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HHV 00:25:58.678 14:30:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HHV 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6c341eaa990432c38f9f6730a583045aa59a95a87fd44cb 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.E9o 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6c341eaa990432c38f9f6730a583045aa59a95a87fd44cb 0 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6c341eaa990432c38f9f6730a583045aa59a95a87fd44cb 0 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6c341eaa990432c38f9f6730a583045aa59a95a87fd44cb 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.E9o 00:25:58.678 14:30:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.E9o 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.E9o 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=998e8be0696c473908a06ea8877d13402b76de6c6a3ffecb 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.adU 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 998e8be0696c473908a06ea8877d13402b76de6c6a3ffecb 2 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 998e8be0696c473908a06ea8877d13402b76de6c6a3ffecb 2 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=998e8be0696c473908a06ea8877d13402b76de6c6a3ffecb 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.678 14:30:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.adU 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.adU 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.adU 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a2b8addad6711bc85cf1e249cd6b13ba 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.alH 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a2b8addad6711bc85cf1e249cd6b13ba 1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a2b8addad6711bc85cf1e249cd6b13ba 1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a2b8addad6711bc85cf1e249cd6b13ba 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.alH 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.alH 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.alH 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93525325ea21d697973b9c2e97dcd321 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gCK 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93525325ea21d697973b9c2e97dcd321 1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93525325ea21d697973b9c2e97dcd321 1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93525325ea21d697973b9c2e97dcd321 00:25:58.678 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:58.678 
14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gCK 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gCK 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gCK 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f0abb294950c1050dfe1616d237fcde9316bcdab8c10cc54 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sai 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0abb294950c1050dfe1616d237fcde9316bcdab8c10cc54 2 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0abb294950c1050dfe1616d237fcde9316bcdab8c10cc54 2 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=f0abb294950c1050dfe1616d237fcde9316bcdab8c10cc54 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sai 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sai 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Sai 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=585f8549f86b4a1b8623744cdc3e6047 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kZJ 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 585f8549f86b4a1b8623744cdc3e6047 0 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 585f8549f86b4a1b8623744cdc3e6047 0 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.937 14:30:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=585f8549f86b4a1b8623744cdc3e6047 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kZJ 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kZJ 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kZJ 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=780472235c568b555dfc07bec19142775f6eced2c05047a42387289a9e03e3f3 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PTl 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 780472235c568b555dfc07bec19142775f6eced2c05047a42387289a9e03e3f3 3 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 780472235c568b555dfc07bec19142775f6eced2c05047a42387289a9e03e3f3 3 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=780472235c568b555dfc07bec19142775f6eced2c05047a42387289a9e03e3f3 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PTl 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PTl 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PTl 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2675773 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2675773 ']' 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.937 14:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kE3 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HHV ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HHV 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.E9o 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.adU ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.adU 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.alH 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gCK ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gCK 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Sai 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 
14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kZJ ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kZJ 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PTl 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.196 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:59.197 14:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:01.727 Waiting for block devices as requested 00:26:01.986 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:01.986 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:01.986 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:02.244 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:02.244 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:02.244 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:02.244 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:02.503 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:02.503 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:02.503 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:02.503 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:02.762 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:02.762 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:02.762 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:03.021 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:03.021 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:03.021 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:03.588 14:30:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:03.848 No valid GPT data, bailing 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:03.848 00:26:03.848 Discovery Log Number of Records 2, Generation counter 2 00:26:03.848 =====Discovery Log Entry 0====== 00:26:03.848 trtype: tcp 00:26:03.848 adrfam: ipv4 00:26:03.848 subtype: current discovery subsystem 00:26:03.848 treq: not specified, sq flow control disable supported 00:26:03.848 portid: 1 00:26:03.848 trsvcid: 4420 00:26:03.848 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:03.848 traddr: 10.0.0.1 00:26:03.848 eflags: none 00:26:03.848 sectype: none 00:26:03.848 =====Discovery Log Entry 1====== 00:26:03.848 trtype: tcp 00:26:03.848 adrfam: ipv4 00:26:03.848 subtype: nvme subsystem 00:26:03.848 treq: not specified, sq flow control disable supported 00:26:03.848 portid: 1 00:26:03.848 trsvcid: 4420 00:26:03.848 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:03.848 traddr: 10.0.0.1 00:26:03.848 eflags: none 00:26:03.848 sectype: none 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.848 14:30:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.848 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.108 nvme0n1 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.108 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.109 14:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.109 nvme0n1 00:26:04.109 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.369 14:30:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.369 nvme0n1 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.369 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.628 14:30:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.628 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.629 nvme0n1 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.629 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.889 nvme0n1 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.889 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.148 nvme0n1 00:26:05.148 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.148 14:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.148 14:30:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.148 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.148 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.148 14:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.148 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.149 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.149 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 nvme0n1 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.407 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.408 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.408 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.408 14:30:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.408 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.697 nvme0n1 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.697 14:30:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.697 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 nvme0n1 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:05.964 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.965 14:30:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.965 nvme0n1 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.965 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.224 14:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.224 nvme0n1 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.224 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.225 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.484 14:30:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.484 nvme0n1 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.484 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 
14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.743 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 nvme0n1 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.002 14:30:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.002 14:30:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.002 14:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.003 14:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.003 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.003 14:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 nvme0n1 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.262 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 nvme0n1 00:26:07.521 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.522 14:30:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.522 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.781 nvme0n1 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.781 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.040 
14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.040 14:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.300 nvme0n1 00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.300 14:31:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.300 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.868 nvme0n1
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.868 14:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.127 nvme0n1
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.127 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.695 nvme0n1
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=:
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=:
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.695 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.953 nvme0n1
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.954 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.211 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg:
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=:
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg:
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=:
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.212 14:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.778 nvme0n1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:10.778 14:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.344 nvme0n1
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.344 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.931 nvme0n1
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:11.931 14:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.496 nvme0n1
00:26:12.496 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.496 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:12.496 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:12.496 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.496 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.754 14:31:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.754 14:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.320 nvme0n1 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.320 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.321 
14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.321 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.579 nvme0n1 00:26:13.579 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.579 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.580 
14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.580 nvme0n1 00:26:13.580 14:31:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.580 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.839 14:31:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.839 14:31:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 nvme0n1 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 
14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.098 14:31:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.098 nvme0n1 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.098 14:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.098 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.099 14:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.099 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 nvme0n1 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.358 14:31:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.358 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.618 nvme0n1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:14.618 14:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.618 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.877 nvme0n1 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.877 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.878 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.136 nvme0n1 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.136 14:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.136 14:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.394 nvme0n1 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.394 14:31:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.394 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.395 nvme0n1 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.395 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.653 
14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.653 
14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.653 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.912 nvme0n1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.912 14:31:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.912 14:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.171 nvme0n1 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.171 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.429 nvme0n1 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.429 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.430 14:31:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.430 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 nvme0n1 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 14:31:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.688 14:31:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.688 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.946 nvme0n1 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.946 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.205 14:31:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 14:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.205 14:31:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.463 nvme0n1 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.463 
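The log above repeats one cycle per key: restrict the host's negotiable DH-HMAC-CHAP parameters, attach with a key pair, verify the controller exists, then detach. The sketch below reproduces that loop pattern as a dry run (it echoes the `rpc.py` invocations rather than executing them, so it is self-contained); the digest/dhgroup/keyid values mirror this section of the log, and the detail that keyid 4 carries no controller key (`ckey=''` above) is modeled the same way `auth.sh` does with its `${ckeys[keyid]:+...}` expansion. Paths and the exact helper name are assumptions, not the test's real code.

```shell
# Dry-run sketch of one authentication cycle from this log (hypothetical
# helper; a real run would execute these as SPDK ./scripts/rpc.py calls).
auth_cycle() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Host side: only allow this digest/dhgroup to be negotiated.
  echo "rpc.py bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  # Controller key is optional: keyid 4 in this log has no ckey, mirroring
  # auth.sh's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key ...}) expansion.
  local ckey=""
  if [ "$keyid" -ne 4 ]; then
    ckey=" --dhchap-ctrlr-key ckey$keyid"
  fi
  # Connect with bidirectional (or unidirectional) authentication, as logged.
  echo "rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key$keyid$ckey"
  # auth.sh then checks bdev_nvme_get_controllers reports nvme0 before this:
  echo "rpc.py bdev_nvme_detach_controller nvme0"
}

# The dhgroup loop visible in this section (sha384 digest, keys 0..4):
for dhgroup in ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3 4; do
    auth_cycle sha384 "$dhgroup" "$keyid"
  done
done
```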
14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.463 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.031 nvme0n1 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.031 14:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.290 nvme0n1 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.290 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.549 14:31:10 
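The `DHHC-1:NN:<base64>:` strings echoed throughout this log follow the NVMe DH-HMAC-CHAP secret representation used by nvme-cli and SPDK: a hash identifier byte, then the base64 of the secret with a little-endian CRC-32 of the secret appended. That framing is my recollection of the convention, not something the log itself states, so treat the format details below as an assumption; the checker is a minimal sketch for inspecting keys like the ones above.

```python
import base64
import binascii
import struct


def check_dhchap_key(key: str) -> bytes:
    """Validate a DHHC-1 secret string and return the raw key bytes.

    Assumed layout (per the nvme-cli/SPDK convention, to my recollection):
      DHHC-1:<hh>:<base64(secret || crc32(secret) as little-endian u32)>:
    where <hh> identifies the transformation hash (00 = none).
    """
    prefix, hmac_id, b64, trailer = key.split(":")
    if prefix != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    # Last 4 bytes are the CRC-32 of the secret, little-endian.
    secret, crc = blob[:-4], struct.unpack("<I", blob[-4:])[0]
    if binascii.crc32(secret) != crc:
        raise ValueError("CRC mismatch")
    return secret
```

Under this assumption, a corrupted base64 payload (like a copy-paste error in a test config) fails the CRC check instead of producing a silently wrong key at authentication time.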
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.549 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.807 nvme0n1 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.807 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.808 14:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.374 nvme0n1 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:19.374 14:31:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.374 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.941 nvme0n1 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.941 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.942 14:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.510 nvme0n1 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.510 14:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.078 nvme0n1 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.078 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.336 14:31:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.336 14:31:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.336 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.903 nvme0n1 00:26:21.903 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.903 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.903 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.904 
14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.904 14:31:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.904 14:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.502 nvme0n1 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:22.502 
14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.502 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.503 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.761 nvme0n1 00:26:22.761 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.761 14:31:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.761 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.761 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.761 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.762 nvme0n1 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.762 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.021 
14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:23.021 nvme0n1 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.021 14:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:23.021 
14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.021 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.022 14:31:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.022 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.281 nvme0n1 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:23.281 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.282 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.541 nvme0n1 00:26:23.541 14:31:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.541 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.542 14:31:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.542 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.801 nvme0n1 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.801 
14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.801 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 nvme0n1
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.061 14:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.320 nvme0n1
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:24.320 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:24.321 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.321 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.579 nvme0n1
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=:
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=:
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:24.579 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.580 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.839 nvme0n1
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg:
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=:
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg:
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=:
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:24.839 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.098 nvme0n1
00:26:25.098 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.098 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:25.098 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:25.098 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==:
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==:
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.099 14:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.359 nvme0n1
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK:
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02:
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.359 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.618 nvme0n1
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==:
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN:
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.618 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.877 nvme0n1
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.877 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.136 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=:
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.137 14:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.137 nvme0n1 00:26:26.137 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.395 
14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.395 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.395 14:31:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.652 nvme0n1 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.652 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.653 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.910 14:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 nvme0n1 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.168 14:31:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.735 nvme0n1 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.735 14:31:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.735 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.993 nvme0n1 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.993 14:31:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.993 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.994 14:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.561 nvme0n1 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIxYjQxMGQzYjg4NjExNTQzNDgyYTRmYzAwNzBhZTZdr/Cg: 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWM1YWFiMDU4NjAzYzE3OGM0NDJlOGI2MWI3ZTU1NjMyOTI4NzA2MDk1YmQ2ZjQ5ZmQ1ODAxODdhYmI3YjY1MsRYPew=: 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.561 14:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.129 nvme0n1 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.129 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.130 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.130 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.130 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.695 nvme0n1 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.695 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTJiOGFkZGFkNjcxMWJjODVjZjFlMjQ5Y2Q2YjEzYmEqFaCK: 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1MjUzMjVlYTIxZDY5Nzk3M2I5YzJlOTdkY2QzMjHmYW02: 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.954 14:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.521 nvme0n1 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjBhYmIyOTQ5NTBjMTA1MGRmZTE2MTZkMjM3ZmNkZTkzMTZiY2RhYjhjMTBjYzU0W4B+UA==: 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1Zjg1NDlmODZiNGExYjg2MjM3NDRjZGMzZTYwNDfAq0LN: 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.521 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.522 14:31:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.522 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.088 nvme0n1 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.088 14:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.088 14:31:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNDcyMjM1YzU2OGI1NTVkZmMwN2JlYzE5MTQyNzc1ZjZlY2VkMmMwNTA0N2E0MjM4NzI4OWE5ZTAzZTNmM5C+E5E=: 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:31.088 14:31:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.088 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 nvme0n1 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZjMzQxZWFhOTkwNDMyYzM4ZjlmNjczMGE1ODMwNDVhYTU5YTk1YTg3ZmQ0NGNiOzhKXg==: 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTk4ZThiZTA2OTZjNDczOTA4YTA2ZWE4ODc3ZDEzNDAyYjc2ZGU2YzZhM2ZmZWNiq10t4w==: 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:31.914 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.915 request: 00:26:31.915 { 00:26:31.915 "name": "nvme0", 00:26:31.915 "trtype": "tcp", 00:26:31.915 "traddr": "10.0.0.1", 00:26:31.915 "adrfam": "ipv4", 00:26:31.915 "trsvcid": "4420", 00:26:31.915 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:31.915 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:31.915 "prchk_reftag": false, 00:26:31.915 "prchk_guard": false, 00:26:31.915 "hdgst": false, 00:26:31.915 "ddgst": false, 00:26:31.915 "method": "bdev_nvme_attach_controller", 00:26:31.915 "req_id": 1 00:26:31.915 } 00:26:31.915 Got JSON-RPC error response 00:26:31.915 response: 00:26:31.915 { 00:26:31.915 "code": -5, 00:26:31.915 "message": "Input/output error" 00:26:31.915 } 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.915 request: 00:26:31.915 { 00:26:31.915 "name": "nvme0", 00:26:31.915 "trtype": "tcp", 00:26:31.915 "traddr": "10.0.0.1", 00:26:31.915 "adrfam": "ipv4", 00:26:31.915 "trsvcid": "4420", 00:26:31.915 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:31.915 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:31.915 "prchk_reftag": false, 00:26:31.915 "prchk_guard": false, 00:26:31.915 "hdgst": false, 00:26:31.915 "ddgst": false, 00:26:31.915 "dhchap_key": "key2", 00:26:31.915 "method": "bdev_nvme_attach_controller", 00:26:31.915 "req_id": 1 00:26:31.915 } 00:26:31.915 Got JSON-RPC error response 00:26:31.915 response: 00:26:31.915 { 
00:26:31.915 "code": -5, 00:26:31.915 "message": "Input/output error" 00:26:31.915 } 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.915 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.174 
14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.174 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.174 request: 00:26:32.174 { 00:26:32.174 "name": "nvme0", 00:26:32.174 "trtype": "tcp", 00:26:32.174 "traddr": "10.0.0.1", 00:26:32.174 "adrfam": "ipv4", 00:26:32.174 "trsvcid": "4420", 00:26:32.174 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:32.174 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:32.174 
"prchk_reftag": false, 00:26:32.174 "prchk_guard": false, 00:26:32.174 "hdgst": false, 00:26:32.174 "ddgst": false, 00:26:32.174 "dhchap_key": "key1", 00:26:32.175 "dhchap_ctrlr_key": "ckey2", 00:26:32.175 "method": "bdev_nvme_attach_controller", 00:26:32.175 "req_id": 1 00:26:32.175 } 00:26:32.175 Got JSON-RPC error response 00:26:32.175 response: 00:26:32.175 { 00:26:32.175 "code": -5, 00:26:32.175 "message": "Input/output error" 00:26:32.175 } 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.175 14:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.175 rmmod nvme_tcp 00:26:32.175 rmmod nvme_fabrics 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2675773 ']' 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2675773 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2675773 ']' 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2675773 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2675773 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2675773' 00:26:32.175 killing process with pid 2675773 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2675773 00:26:32.175 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2675773 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:32.433 14:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.338 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:34.596 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:34.596 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.596 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:34.596 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:34.596 14:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:37.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:26:37.132 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.132 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.069 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.069 14:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kE3 /tmp/spdk.key-null.E9o /tmp/spdk.key-sha256.alH /tmp/spdk.key-sha384.Sai /tmp/spdk.key-sha512.PTl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.069 14:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:40.611 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:40.611 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:40.611 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:40.611 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:40.612 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:40.612 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:40.612 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:40.612 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:40.612 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:40.612 00:26:40.612 real 0m48.076s 00:26:40.612 user 0m43.232s 00:26:40.612 sys 0m11.156s 00:26:40.612 14:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.612 14:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.612 ************************************ 00:26:40.612 END TEST nvmf_auth_host 00:26:40.612 ************************************ 00:26:40.612 14:31:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:40.612 14:31:32 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:40.612 14:31:32 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:40.612 14:31:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:40.612 14:31:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.612 14:31:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:40.612 ************************************ 00:26:40.612 START TEST nvmf_digest 00:26:40.612 ************************************ 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:40.612 * Looking for test storage... 
00:26:40.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.612 14:31:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:45.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:45.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.942 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:45.942 Found net devices under 0000:86:00.0: cvl_0_0 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:45.943 Found net devices under 0000:86:00.1: cvl_0_1 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:26:45.943 14:31:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:26:45.943 00:26:45.943 --- 10.0.0.2 ping statistics --- 00:26:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.943 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:26:45.943 00:26:45.943 --- 10.0.0.1 ping statistics --- 00:26:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.943 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.943 ************************************ 00:26:45.943 START TEST nvmf_digest_clean 00:26:45.943 ************************************ 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:45.943 14:31:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2688686 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2688686 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2688686 ']' 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.943 14:31:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.943 [2024-07-12 14:31:37.272054] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:26:45.943 [2024-07-12 14:31:37.272095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.943 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.943 [2024-07-12 14:31:37.328947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.943 [2024-07-12 14:31:37.407222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.943 [2024-07-12 14:31:37.407255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.943 [2024-07-12 14:31:37.407262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.943 [2024-07-12 14:31:37.407268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.943 [2024-07-12 14:31:37.407273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:45.943 [2024-07-12 14:31:37.407289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.202 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.202 null0 00:26:46.202 [2024-07-12 14:31:38.198845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.461 [2024-07-12 14:31:38.223001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.461 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.461 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:46.461 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.461 
14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.461 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:46.461 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2688832 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2688832 /var/tmp/bperf.sock 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2688832 ']' 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:46.462 14:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.462 [2024-07-12 14:31:38.273005] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:26:46.462 [2024-07-12 14:31:38.273048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688832 ] 00:26:46.462 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.462 [2024-07-12 14:31:38.327126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.462 [2024-07-12 14:31:38.399964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.409 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.668 nvme0n1 00:26:47.668 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.668 14:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:26:47.668 Running I/O for 2 seconds... 00:26:50.202 00:26:50.202 Latency(us) 00:26:50.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.202 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:50.202 nvme0n1 : 2.01 24712.73 96.53 0.00 0.00 5173.34 2521.71 14474.91 00:26:50.202 =================================================================================================================== 00:26:50.202 Total : 24712.73 96.53 0.00 0.00 5173.34 2521.71 14474.91 00:26:50.202 0 00:26:50.202 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:50.202 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:50.202 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:50.203 | select(.opcode=="crc32c") 00:26:50.203 | "\(.module_name) \(.executed)"' 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2688832 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2688832 ']' 00:26:50.203 14:31:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2688832 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688832 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688832' 00:26:50.203 killing process with pid 2688832 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2688832 00:26:50.203 Received shutdown signal, test time was about 2.000000 seconds 00:26:50.203 00:26:50.203 Latency(us) 00:26:50.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.203 =================================================================================================================== 00:26:50.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.203 14:31:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2688832 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:50.203 14:31:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2689526 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2689526 /var/tmp/bperf.sock 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2689526 ']' 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.203 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.203 [2024-07-12 14:31:42.133063] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:26:50.203 [2024-07-12 14:31:42.133111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689526 ] 00:26:50.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.203 Zero copy mechanism will not be used. 00:26:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.203 [2024-07-12 14:31:42.187385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.462 [2024-07-12 14:31:42.260302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.030 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.030 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:51.030 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:51.030 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.030 14:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:51.287 14:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.287 14:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.544 nvme0n1 00:26:51.544 14:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:51.544 14:31:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.803 Zero copy mechanism will not be used. 00:26:51.803 Running I/O for 2 seconds... 00:26:53.707 00:26:53.707 Latency(us) 00:26:53.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.707 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:53.707 nvme0n1 : 2.04 5022.48 627.81 0.00 0.00 3123.52 637.55 43766.65 00:26:53.707 =================================================================================================================== 00:26:53.707 Total : 5022.48 627.81 0.00 0.00 3123.52 637.55 43766.65 00:26:53.707 0 00:26:53.707 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:53.707 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:53.707 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:53.707 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:53.707 | select(.opcode=="crc32c") 00:26:53.707 | "\(.module_name) \(.executed)"' 00:26:53.707 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2689526 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2689526 ']' 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2689526 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2689526 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2689526' 00:26:53.965 killing process with pid 2689526 00:26:53.965 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2689526 00:26:53.966 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.966 00:26:53.966 Latency(us) 00:26:53.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.966 =================================================================================================================== 00:26:53.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.966 14:31:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2689526 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2690222 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2690222 /var/tmp/bperf.sock 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2690222 ']' 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:54.224 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:54.224 [2024-07-12 14:31:46.146903] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:26:54.224 [2024-07-12 14:31:46.146949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690222 ] 00:26:54.224 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.224 [2024-07-12 14:31:46.201438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.483 [2024-07-12 14:31:46.281491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.051 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.051 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:55.051 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.051 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.051 14:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.309 14:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.309 14:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.568 nvme0n1 00:26:55.568 14:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:55.568 14:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:26:55.568 Running I/O for 2 seconds... 00:26:58.104 00:26:58.104 Latency(us) 00:26:58.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.104 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:58.104 nvme0n1 : 2.00 27223.27 106.34 0.00 0.00 4693.55 2051.56 6496.61 00:26:58.104 =================================================================================================================== 00:26:58.104 Total : 27223.27 106.34 0.00 0.00 4693.55 2051.56 6496.61 00:26:58.104 0 00:26:58.104 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:58.104 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:58.104 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:58.104 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:58.104 | select(.opcode=="crc32c") 00:26:58.104 | "\(.module_name) \(.executed)"' 00:26:58.104 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2690222 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2690222 ']' 00:26:58.105 14:31:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2690222 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2690222 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2690222' 00:26:58.105 killing process with pid 2690222 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2690222 00:26:58.105 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.105 00:26:58.105 Latency(us) 00:26:58.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.105 =================================================================================================================== 00:26:58.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2690222 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:58.105 14:31:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2690739 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2690739 /var/tmp/bperf.sock 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2690739 ']' 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.105 14:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.105 [2024-07-12 14:31:49.989016] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:26:58.105 [2024-07-12 14:31:49.989067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690739 ] 00:26:58.105 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.105 Zero copy mechanism will not be used. 00:26:58.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.105 [2024-07-12 14:31:50.044360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.364 [2024-07-12 14:31:50.131154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.931 14:31:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.931 14:31:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:58.931 14:31:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.931 14:31:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.931 14:31:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:59.190 14:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.190 14:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.448 nvme0n1 00:26:59.448 14:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.449 14:31:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.449 Zero copy mechanism will not be used. 00:26:59.449 Running I/O for 2 seconds... 00:27:01.983 00:27:01.983 Latency(us) 00:27:01.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.983 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:01.983 nvme0n1 : 2.00 6381.39 797.67 0.00 0.00 2502.97 1745.25 12993.22 00:27:01.983 =================================================================================================================== 00:27:01.983 Total : 6381.39 797.67 0.00 0.00 2502.97 1745.25 12993.22 00:27:01.983 0 00:27:01.983 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.983 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:01.983 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.984 | select(.opcode=="crc32c") 00:27:01.984 | "\(.module_name) \(.executed)"' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2690739 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2690739 ']' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2690739 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2690739 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2690739' 00:27:01.984 killing process with pid 2690739 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2690739 00:27:01.984 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.984 00:27:01.984 Latency(us) 00:27:01.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.984 =================================================================================================================== 00:27:01.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2690739 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2688686 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2688686 ']' 00:27:01.984 
14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2688686 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688686 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688686' 00:27:01.984 killing process with pid 2688686 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2688686 00:27:01.984 14:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2688686 00:27:02.243 00:27:02.243 real 0m16.786s 00:27:02.243 user 0m32.110s 00:27:02.243 sys 0m4.519s 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:02.243 ************************************ 00:27:02.243 END TEST nvmf_digest_clean 00:27:02.243 ************************************ 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.243 14:31:54 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:02.243 ************************************ 00:27:02.243 START TEST nvmf_digest_error 00:27:02.243 ************************************ 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2691431 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2691431 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2691431 ']' 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:02.243 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.243 [2024-07-12 14:31:54.127674] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:02.243 [2024-07-12 14:31:54.127716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.243 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.243 [2024-07-12 14:31:54.183334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.502 [2024-07-12 14:31:54.262800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.502 [2024-07-12 14:31:54.262832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.502 [2024-07-12 14:31:54.262839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.502 [2024-07-12 14:31:54.262845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.502 [2024-07-12 14:31:54.262850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:02.502 [2024-07-12 14:31:54.262866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.069 [2024-07-12 14:31:54.972924] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.069 14:31:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.069 null0 00:27:03.069 [2024-07-12 14:31:55.062402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.328 
[2024-07-12 14:31:55.086563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2691671 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2691671 /var/tmp/bperf.sock 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2691671 ']' 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.328 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.328 [2024-07-12 14:31:55.135908] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:03.328 [2024-07-12 14:31:55.135949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691671 ] 00:27:03.328 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.328 [2024-07-12 14:31:55.189750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.328 [2024-07-12 14:31:55.264589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.261 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.261 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:04.261 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.261 14:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.261 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.519 nvme0n1 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.519 14:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.833 Running I/O for 2 seconds... 
00:27:04.833 [2024-07-12 14:31:56.583162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.583196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.583206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.592382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.592408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.592418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.604106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.604127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.604135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.613144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.613163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.613171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.623530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.623551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.623559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.635164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.635184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.635193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.643669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.643688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.643696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.653986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.654006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.654018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.663350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.663370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.663383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.674200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.674219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.674227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.682962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.682982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.682990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.694938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.694957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10505 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.694965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.703561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.703584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.703592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.716258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.716277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.716285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.727980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.728000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.739782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.833 [2024-07-12 14:31:56.739810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.833 [2024-07-12 14:31:56.748515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.833 [2024-07-12 14:31:56.748535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.748543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.758141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.758160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.758168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.767996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.768015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.777076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.777095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.777103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.787090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.787109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.787117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.795793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.795812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.795820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.805127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:04.834 [2024-07-12 14:31:56.805146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.834 [2024-07-12 14:31:56.805154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.834 [2024-07-12 14:31:56.815997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x173ff20)
00:27:04.834 [2024-07-12 14:31:56.816017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.834 [2024-07-12 14:31:56.816024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.825436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.825456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.825468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.837096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.837117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.837126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.846524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.846544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.846552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.855925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.855944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.855952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.864357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.864381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.864390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.875245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.875265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.875272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.884582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.884601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.884609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.894337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.894357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.894364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.902664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.902682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.902690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.914547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.914570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.914578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.925610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.925629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.925637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.934509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.934527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.934535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.945069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.945089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.953329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.953348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.953356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.964142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.964162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.964170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.974617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.974636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.974644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.984281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.984300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.984308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:56.992296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:56.992315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:56.992323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:57.002952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:57.002973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:57.002981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:57.012582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:57.012602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:57.012610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:57.021742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:57.021763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:57.021770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:57.034836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.093 [2024-07-12 14:31:57.034858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.093 [2024-07-12 14:31:57.034866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.093 [2024-07-12 14:31:57.046100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.046121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.046131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.094 [2024-07-12 14:31:57.054743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.054764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.054771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.094 [2024-07-12 14:31:57.067930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.067950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.067959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.094 [2024-07-12 14:31:57.077143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.077162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.077170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.094 [2024-07-12 14:31:57.086619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.086639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.086652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.094 [2024-07-12 14:31:57.095507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.094 [2024-07-12 14:31:57.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.094 [2024-07-12 14:31:57.095535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.352 [2024-07-12 14:31:57.105731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.352 [2024-07-12 14:31:57.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.105760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.115417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.115437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.115445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.125335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.125354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.125362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.133487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.133507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.133515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.144052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.144072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.144080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.153517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.153537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.153544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.163900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.163919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.163927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.171562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.171582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.171590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.181387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.181407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.181415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.192645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.192665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.192673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.201374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.201399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.201407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.214173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.214192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.214200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.222461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.222481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.222489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.233396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.233416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.233424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.246024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.246045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.246053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.258490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.258511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.258522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.270205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.270225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.270233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.278956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.278976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.278983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.290890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.290910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.290918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.299109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.299129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.299137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.311123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.311144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.311152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.323669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.323690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.323697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.331788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.331808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.331816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.342556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.342576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.342584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.353 [2024-07-12 14:31:57.353737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.353 [2024-07-12 14:31:57.353761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.353 [2024-07-12 14:31:57.353769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.363099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.363119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.363126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.375611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.375630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.375638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.388094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.388113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.388121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.399730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.399750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.399757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.408747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.408767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.408774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.420999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.421020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.421028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.432525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.432545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.432553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.441720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.441747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.453394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.453414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.453422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.466301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.466321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.466329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.477596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.477615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.477623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.486227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.486247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.486254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.498446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.498465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.498473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.510765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.510784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.510791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.523887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.523907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.523914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.536824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.536844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.536851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.612 [2024-07-12 14:31:57.544893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.612 [2024-07-12 14:31:57.544912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.612 [2024-07-12 14:31:57.544923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.556749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.556769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.556776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.569440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.569460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.569468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.581007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.581026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.581034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.590074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.590094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.590101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.602181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.602203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.602210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.613 [2024-07-12 14:31:57.614550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.613 [2024-07-12 14:31:57.614571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.613 [2024-07-12 14:31:57.614579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.873 [2024-07-12 14:31:57.626936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.873 [2024-07-12 14:31:57.626957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.873 [2024-07-12 14:31:57.626965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.873 [2024-07-12 14:31:57.640111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.873 [2024-07-12 14:31:57.640131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.873 [2024-07-12 14:31:57.640139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.873 [2024-07-12 14:31:57.650251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.873 [2024-07-12 14:31:57.650270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.873 [2024-07-12 14:31:57.650278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.873 [2024-07-12 14:31:57.658843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.873 [2024-07-12 14:31:57.658861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.873 [2024-07-12 14:31:57.658869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.873 [2024-07-12 14:31:57.669189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20)
00:27:05.873 [2024-07-12 14:31:57.669209] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.669217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.680285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.680304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.680312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.689772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.689791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.689799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.699159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.699179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.699186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.711194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.711214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.711221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.719945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.719964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.719972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.732828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.732847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.732858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.741361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.741393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.751983] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.752002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.764780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.764799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.764806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.773266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.773285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.773292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.785231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.785250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.785258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.793580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.793599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.793606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.806176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.806195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.806203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.818196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.818216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.818223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.830650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.830673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.830681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.842654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.842673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.842681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.854789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.854810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.854817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.863930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.863949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.863957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.873 [2024-07-12 14:31:57.876279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:05.873 [2024-07-12 14:31:57.876299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.873 [2024-07-12 14:31:57.876306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.132 [2024-07-12 14:31:57.889199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.132 [2024-07-12 14:31:57.889218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.132 [2024-07-12 14:31:57.889225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.132 [2024-07-12 14:31:57.901657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.132 [2024-07-12 14:31:57.901677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.132 [2024-07-12 14:31:57.901684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.132 [2024-07-12 14:31:57.913017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.132 [2024-07-12 14:31:57.913036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.913043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.921666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.921684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10618 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.921692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.934410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.934430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.934438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.946041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.946061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.946069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.954631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.954651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.954659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.965949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.965969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.965977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.974411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.974431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.974439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.984462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.984482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.984490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:57.996747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:57.996766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:57.996774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.005437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 
14:31:58.005457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.005464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.016129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.016147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.016158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.028321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.028341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.028348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.038398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.038426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.047781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.047800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.047808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.058294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.058314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.058322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.068545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.068565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.068572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.078257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.078276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.078283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.086144] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.086164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.086172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.096283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.096303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.096311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.106446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.106469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.106477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.115073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.115093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.115101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.126112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.126133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.126140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.133 [2024-07-12 14:31:58.138918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.133 [2024-07-12 14:31:58.138940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.133 [2024-07-12 14:31:58.138947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.151737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.151757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.151766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.162852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.162871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.162878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.171769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.171789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.183272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.183291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.183299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.194064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.194084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.194095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.201823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.201843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 
14:31:58.201851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.212120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.212139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.212147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.222309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.222329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.222337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.232458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.232477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.232485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.240940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10660 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.240967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.252227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.252246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.252253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.260596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.260615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.260623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.272090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.272109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.272117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.282625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.282648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.282656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.294617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.294637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.294645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.304333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.304352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.304360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.392 [2024-07-12 14:31:58.312330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.392 [2024-07-12 14:31:58.312350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.392 [2024-07-12 14:31:58.312357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.322999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.323018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.323025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.332627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.332645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.332653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.341530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.341549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.341557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.352938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.352957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.352965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.363096] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.363116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.363124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.373468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.373488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.373496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.382286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.382305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.382313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.393 [2024-07-12 14:31:58.393695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.393 [2024-07-12 14:31:58.393715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.393 [2024-07-12 14:31:58.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.405822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.405841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.415519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.415539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.415547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.424809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.424828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.436273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.436294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.436302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.446874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.446894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.446902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.455740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.455760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.455774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.465776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.465797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.465805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.475229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.475249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 
14:31:58.475257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.484090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.484110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.484117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.492707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.492728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.492736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.503096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.503124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.514659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.514679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10344 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.514687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.522643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.522663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.522671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.533609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.533629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.533637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.544546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.544569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.544576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.553083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.553103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.553110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 [2024-07-12 14:31:58.564417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x173ff20) 00:27:06.652 [2024-07-12 14:31:58.564437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.652 [2024-07-12 14:31:58.564445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.652 00:27:06.652 Latency(us) 00:27:06.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.652 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:06.652 nvme0n1 : 2.00 24382.50 95.24 0.00 0.00 5244.26 2535.96 19375.86 00:27:06.652 =================================================================================================================== 00:27:06.652 Total : 24382.50 95.24 0.00 0.00 5244.26 2535.96 19375.86 00:27:06.652 0 00:27:06.652 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:06.652 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:06.652 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:06.652 | .driver_specific 00:27:06.652 | .nvme_error 00:27:06.652 | .status_code 00:27:06.652 | .command_transient_transport_error' 00:27:06.652 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 )) 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2691671 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2691671 ']' 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2691671 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691671 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691671' 00:27:06.911 killing process with pid 2691671 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2691671 00:27:06.911 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.911 00:27:06.911 Latency(us) 00:27:06.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.911 =================================================================================================================== 00:27:06.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.911 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2691671 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:07.170 
14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2692372 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2692372 /var/tmp/bperf.sock 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2692372 ']' 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.170 14:31:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.170 [2024-07-12 14:31:59.033254] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:27:07.170 [2024-07-12 14:31:59.033305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692372 ] 00:27:07.170 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:07.170 Zero copy mechanism will not be used. 00:27:07.170 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.170 [2024-07-12 14:31:59.088359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.170 [2024-07-12 14:31:59.156161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.107 14:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.107 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.107 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:27:08.107 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.365 nvme0n1 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:08.365 14:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.365 Zero copy mechanism will not be used. 00:27:08.365 Running I/O for 2 seconds... 
00:27:08.365 [2024-07-12 14:32:00.365241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.365 [2024-07-12 14:32:00.365277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.365 [2024-07-12 14:32:00.365287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.365 [2024-07-12 14:32:00.371604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.365 [2024-07-12 14:32:00.371628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.365 [2024-07-12 14:32:00.371636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.377954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.377974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.377981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.384141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.384161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.384170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.390171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.390190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.390198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.396025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.396045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.396053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.401876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.401896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.401908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.407255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.407277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.407285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.623 [2024-07-12 14:32:00.413376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.623 [2024-07-12 14:32:00.413403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.623 [2024-07-12 14:32:00.413412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.624 [2024-07-12 14:32:00.419519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.624 [2024-07-12 14:32:00.419539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.624 [2024-07-12 14:32:00.419547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.624 [2024-07-12 14:32:00.425875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.624 [2024-07-12 14:32:00.425896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.624 [2024-07-12 14:32:00.425904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.624 [2024-07-12 14:32:00.431846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:08.624 [2024-07-12 14:32:00.431867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.431875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.437717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.437738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.437746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.443507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.443528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.443536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.449286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.449307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.449315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.455235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.455259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.455267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.461121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.461140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.461149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.466803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.466823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.466831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.472546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.472577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.472585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.477878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.477899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.477907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.483107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.483128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.483136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.488375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.488401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.488409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.493647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.493667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.493674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.498881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.498901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.498909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.504073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.504093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.509250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.509270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.509277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.515450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.515471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.515479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.522826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.522847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.522855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.529675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.529713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.529722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.537118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.537140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.537149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.544823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.624 [2024-07-12 14:32:00.544844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.624 [2024-07-12 14:32:00.544853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.624 [2024-07-12 14:32:00.553088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.553111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.553119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.561496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.561518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.561530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.569448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.569469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.569481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.577656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.577678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.577686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.585789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.585812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.585821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.593704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.593727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.593735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.601870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.601892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.601901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.610074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.610095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.610103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.618014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.618036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.618044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.625 [2024-07-12 14:32:00.626070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.625 [2024-07-12 14:32:00.626092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.625 [2024-07-12 14:32:00.626100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.633668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.633690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.633698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.641438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.641458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.641467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.648111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.648131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.648139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.654612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.654633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.654642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.660738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.660766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.666773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.666794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.666801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.673715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.673737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.673744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.681130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.681152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.681160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.688633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.688654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.688665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.696532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.696554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.696562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.704470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.704492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.704500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.712665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.712687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.712695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.720588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.720608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.720616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.883 [2024-07-12 14:32:00.725006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.883 [2024-07-12 14:32:00.725027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.883 [2024-07-12 14:32:00.725035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.732162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.732182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.732190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.740405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.740425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.740434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.748323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.748343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.748351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.756337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.756361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.756369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.764230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.764251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.764260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.772727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.772748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.772756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.780765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.780785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.780794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.787634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.787654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.787661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.794448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.794467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.794475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.800614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.800633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.800640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.806764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.806790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.813330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.813349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.813357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.818488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.818507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.818515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.824235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.824254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.824262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.829997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.830016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.830023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.835716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.835735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.835744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.841247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.841265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.841273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.846821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.846841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.846849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.851752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.851773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.851781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.857191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.857211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.857219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.862521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.862542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.862553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.867829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.867849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.867857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.873088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.873108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.873116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.878392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.878412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.878419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.883699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.883719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.883727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.884 [2024-07-12 14:32:00.889076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:08.884 [2024-07-12 14:32:00.889096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.884 [2024-07-12 14:32:00.889104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.894544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.894573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.900079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.900099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.900107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.905642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.905662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.905669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.911126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.911150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.911158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.916678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.916698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.916706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.922278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.922298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.922306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.927807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.927827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.927835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.933348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.933368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.933375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.938503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.938523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.938531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.943877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.943897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.943905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.949135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.949156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.949163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.144 [2024-07-12 14:32:00.954443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.144 [2024-07-12 14:32:00.954463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.144 [2024-07-12 14:32:00.954471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021
p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.959677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.959697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.959705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.964850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.964869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.964877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.970159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.970187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.975602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.975622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.975630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.979190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.979209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.979217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.983754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.983774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.983781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.988823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.988842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.988850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.994460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.994481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.994489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:00.999816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.144 [2024-07-12 14:32:00.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.144 [2024-07-12 14:32:00.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.144 [2024-07-12 14:32:01.005081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.005102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.005110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.010428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.010448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.010456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.015700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.015720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.015728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.021094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.021112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.021121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.026487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.026507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.026515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.032036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.032056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.032064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.037577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.037597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.037604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.042942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.042963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.042970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.048328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.048348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.048355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.053842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.053862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.059407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.059428] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.059435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.065108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.065129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.065137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.070775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.070796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.070803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.076253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.076273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.076281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.081727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.081745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.081753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.087320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.087340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.087348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.092969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.092989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.093000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.098676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.098696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.098703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.104129] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.104149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.104156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.109676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.109696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.109704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.115463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.115482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.115490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.121159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.121179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.121186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.126923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.126943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.126951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.132643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.132664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.132672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.138316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.138336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.138344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.144095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.145 [2024-07-12 14:32:01.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.145 [2024-07-12 14:32:01.144126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.145 [2024-07-12 14:32:01.149871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.146 [2024-07-12 14:32:01.149891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.146 [2024-07-12 14:32:01.149899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.155567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.155588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.155596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.161227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.161247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.161255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.166904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.166924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.166932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.172534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.172553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.172561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.178273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.178292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.178300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.183930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.183950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.183958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.189618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.189639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.189647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.195127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.195147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.195155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.200712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.200732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.200740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.206180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.206199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.206207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.211781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.211801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.211809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.217460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.217479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.217487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.223030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.223050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.223058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.405 [2024-07-12 14:32:01.228572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.405 [2024-07-12 14:32:01.228592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.405 [2024-07-12 14:32:01.228600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.406 [2024-07-12 14:32:01.234200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.406 [2024-07-12 14:32:01.234219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.234227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.239882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.239902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.239912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.245413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.245433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.250897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.250917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.250924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.256362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.256398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.261833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.261854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.261862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.267272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.267294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.267301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.272764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.272784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.272793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.278227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.278247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.278255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.283631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.283652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.283659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.288970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.288994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.289003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.294445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.294466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.294473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.300028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.300049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.300057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.305807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.305828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.305837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.311648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.311669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.311676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.317414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.317435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.317443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.323023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.323044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.323052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.328696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.328719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.328728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.334409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.334430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.334438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.340030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.340052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.340059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.345543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.345572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.350975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.350996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.351004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.356652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.356673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.356681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.362598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.362619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.362627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.368758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.368779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.368787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.376276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.376298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.376306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.383924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.383946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.383954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.391542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.391567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.391576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.406 [2024-07-12 14:32:01.400026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.406 [2024-07-12 14:32:01.400049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.406 [2024-07-12 14:32:01.400057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.407 [2024-07-12 14:32:01.408136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.407 [2024-07-12 14:32:01.408158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.407 [2024-07-12 14:32:01.408166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.416215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.416237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.416246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.424498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.424520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.424527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.432710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.432732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.432740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.441325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.449473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.449494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.449502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.458363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.458391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.458400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.466806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.466828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.466836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.475562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.667 [2024-07-12 14:32:01.475584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.667 [2024-07-12 14:32:01.475592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.667 [2024-07-12 14:32:01.483435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.483456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.483464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.491873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.491895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.491903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.500226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.500247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.500256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.508264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.508286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.508294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.515467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.515488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.515497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.522247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.522269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.529233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.529255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.535549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.535572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.535580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.542027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.542049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.542057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.547942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.547963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.547971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.553826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.553847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.553855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.559882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.559902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.559911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.565998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.566019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.566027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.571928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.571949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.571956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.577811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.577832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.577840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.583839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.583864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.589743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.589764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.589771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.595520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.595540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.595548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.601122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.601142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.601151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.606734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.606754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.606761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.612409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.612430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.612438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.618006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.618026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.618034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.623750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.623770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.623778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.629534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.629554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.629562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.635156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.635176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.635184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.640957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.640979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.640987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.646717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.646737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.646745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.652197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.668 [2024-07-12 14:32:01.652218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.668 [2024-07-12 14:32:01.652227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.668 [2024-07-12 14:32:01.657677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.669 [2024-07-12 14:32:01.657698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.669 [2024-07-12 14:32:01.657705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.669 [2024-07-12 14:32:01.663147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.669 [2024-07-12 14:32:01.663168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.669 [2024-07-12 14:32:01.663175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.669 [2024-07-12 14:32:01.668778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.669 [2024-07-12 14:32:01.668799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.669 [2024-07-12 14:32:01.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.674491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.674512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.674519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.680138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.680162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.680175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.685746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.685766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.685773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.691417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.691437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.691445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.697306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.697327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.697334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.703186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.703206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.703213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.709263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.709283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.709291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.715023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.715043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.715050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.720761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.720781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.720790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.726592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.726612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.726620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.733013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.733034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.733042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.739513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.739533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.929 [2024-07-12 14:32:01.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.929 [2024-07-12 14:32:01.746931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0)
00:27:09.929 [2024-07-12 14:32:01.746952] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.746960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.755103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.755125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.763114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.763135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.763143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.771907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.771929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.771937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.780116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.780137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.780145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.788205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.788227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.788235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.796552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.796574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.796588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.804179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.804201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.804209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.813532] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.813553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.813562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.822036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.822057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.822065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.830522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.830544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.830552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.838835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.838857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.838865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.847423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.847444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.847453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.854295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.854316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.854324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.861303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.861324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.861332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.867880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.867904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.867912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.874158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.874179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.874187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.880214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.880234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.880242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.886097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.886117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.886125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.892323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.892344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.892352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.899027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.899049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.899057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.905548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.905568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.905575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.911909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.911928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.911936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.918444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.918464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.918473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.924845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.924866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.924874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.929 [2024-07-12 14:32:01.931692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:09.929 [2024-07-12 14:32:01.931714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.929 [2024-07-12 14:32:01.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.938426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.938447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.938455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.945540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.945561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.945568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.951504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.951524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.951532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.955298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.955317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.955324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.961822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.961841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.961849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.969394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 
14:32:01.969414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.969422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.976589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.976609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.983815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.983837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.983844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.991151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.991171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.991179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:01.999138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:01.999162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:01.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:02.008042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:02.008063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:02.008072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:02.015917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:02.015937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:02.015945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:02.022853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:02.022874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:02.022882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:02.029656] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:02.029677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:02.029685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.188 [2024-07-12 14:32:02.036579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.188 [2024-07-12 14:32:02.036599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.188 [2024-07-12 14:32:02.036607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.043217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.043241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.049454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.049474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.049482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.055865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.055885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.055893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.061583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.061603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.061611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.068116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.068136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.068144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.074769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.074789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.074797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.080709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.080729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.080737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.086753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.086774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.086782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.092937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.092957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.092964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.099349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.099369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 
14:32:02.099383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.105489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.105510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.105518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.111217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.111237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.111245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.117156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.117177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.117184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.123066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.123086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.123094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.128800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.128820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.128828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.134749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.134769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.141040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.141060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.141068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.147154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.147178] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.147186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.152811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.152833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.152840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.158712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.158732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.158740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.165163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.165184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.165192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.171799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 
14:32:02.171820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.171827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.178196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.178216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.178223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.184666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.184686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.184694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.189 [2024-07-12 14:32:02.190852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.189 [2024-07-12 14:32:02.190873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.189 [2024-07-12 14:32:02.190880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.196117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.196138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.196146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.202062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.202083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.202090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.208996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.209017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.209025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.215533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.215562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.222476] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.222497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.222505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.230130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.230151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.230159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.237875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.237896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.237903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.246790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.246811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.246819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.254969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.254990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.254997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.262629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.262661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.270982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.271003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.271012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.279083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.279103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.279111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.286720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.286741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.286748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.293402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.293423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.293431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.301558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.301581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.301589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.309689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.309710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.309718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.317798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.317820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.317828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.326110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.326132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.326139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.333657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.333683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.333691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.340572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.340593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.340601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.347619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.347639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-07-12 14:32:02.347647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-07-12 14:32:02.351299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.447 [2024-07-12 14:32:02.351318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.448 [2024-07-12 14:32:02.351327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.448 [2024-07-12 14:32:02.358150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.448 [2024-07-12 14:32:02.358170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.448 [2024-07-12 14:32:02.358178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.448 [2024-07-12 14:32:02.364208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15520b0) 00:27:10.448 [2024-07-12 14:32:02.364228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.448 [2024-07-12 14:32:02.364236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:10.448
00:27:10.448 Latency(us)
00:27:10.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.448 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:10.448 nvme0n1 : 2.00 4864.77 608.10 0.00 0.00 3285.50 666.05 9118.05
00:27:10.448 ===================================================================================================================
00:27:10.448 Total : 4864.77 608.10 0.00 0.00 3285.50 666.05 9118.05
00:27:10.448 0
00:27:10.448 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:10.448 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:10.448 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:10.448 | .driver_specific
00:27:10.448 | .nvme_error
00:27:10.448 | .status_code
00:27:10.448 | .command_transient_transport_error'
00:27:10.448 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 ))
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2692372
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2692372 ']'
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2692372
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2692372
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2692372'
killing process with pid 2692372
14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2692372
Received shutdown signal, test time was about 2.000000 seconds
00:27:10.706
00:27:10.706 Latency(us)
00:27:10.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.706 ===================================================================================================================
00:27:10.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:10.706 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2692372
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2692894
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- #
waitforlisten 2692894 /var/tmp/bperf.sock
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2692894 ']'
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:10.964 14:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.964 [2024-07-12 14:32:02.835139] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
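The get_transient_errcount helper traced earlier fetches per-bdev NVMe error counters over the bperf RPC socket and filters them with jq. The same extraction can be sketched in Python; the JSON below is a hypothetical, trimmed-down bdev_get_iostat reply whose shape is inferred only from the jq path in the trace, not captured from a live run:

```python
import json

# Hypothetical, minimal bdev_get_iostat reply: only the fields named by the
# jq filter in the trace are assumed here; a real reply has many more counters.
reply = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 314
          }
        }
      }
    }
  ]
}
""")

# Same walk as: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                      | .status_code | .command_transient_transport_error'
errcount = reply["bdevs"][0]["driver_specific"]["nvme_error"][
    "status_code"]["command_transient_transport_error"]

# The test then checks the counter is non-zero: (( errcount > 0 ))
assert errcount > 0
```

The 314 used here mirrors the `(( 314 > 0 ))` check in the trace: the randread phase counted 314 transient transport errors, which is what the injected digest corruption is expected to produce.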
00:27:10.964 [2024-07-12 14:32:02.835187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692894 ]
00:27:10.964 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.964 [2024-07-12 14:32:02.890051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.964 [2024-07-12 14:32:02.969568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:11.899 14:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:12.466 nvme0n1
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:12.466 14:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:12.467 Running I/O for 2 seconds...
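The burst of "Data digest error" messages that follows is the intended outcome here: the controller was attached with --ddgst and accel_error_inject_error was just told to corrupt crc32c results, so the CRC-32C data digest (DDGST) computed over each data PDU stops matching. As a minimal illustration of the checksum involved (a bit-at-a-time sketch, not SPDK's accelerated implementation):

```python
def crc32c(data: bytes) -> int:
    """Bit-at-a-time CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    NVMe/TCP data digests use this CRC; it is the crc32c opcode that the
    accel_error_inject_error call above corrupts on every result.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value over the ASCII digits "123456789":
assert crc32c(b"123456789") == 0xE3069283
```

Any single corrupted bit in the computed digest makes the receive-side comparison fail, which is why every completion below is reported as a COMMAND TRANSIENT TRANSPORT ERROR rather than a data error: the payload itself is fine, only the digest check failed.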
00:27:12.467 [2024-07-12 14:32:04.365663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.365840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.375409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.375556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.375577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.385086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.385232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.385250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.394668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.394818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.394835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.404334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.404514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.404531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.413973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.414135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.414152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.423632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.423774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.423791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.433216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.433359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.433376] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.442842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.442993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.443010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.452437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.452618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.452635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.462076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.462219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.462236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.467 [2024-07-12 14:32:04.471750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.467 [2024-07-12 14:32:04.471895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.467 [2024-07-12 14:32:04.471912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.481547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.481691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.481708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.491133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.491276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.491292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.500661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.500803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.510240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.510387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.726 [2024-07-12 14:32:04.510420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.519896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.520038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.520055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.529536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.529680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.529697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.539133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.539275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.539292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.548677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.548837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10200 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.548855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.558246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.558395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.558414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.567844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.567988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.568007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.577402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.577564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.577582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.586951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.587095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.587112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.596508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.596668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.596685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.606083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.726 [2024-07-12 14:32:04.606228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.726 [2024-07-12 14:32:04.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.726 [2024-07-12 14:32:04.615605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.615763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.615780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.625406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.625550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.625569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.635119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.635280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.635298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.644831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.644976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.644993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.654383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.654530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.654548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.663902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 
00:27:12.727 [2024-07-12 14:32:04.664044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.664060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.673487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.673646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.673663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.682993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.683136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.683154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.692568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.692727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.692745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.702121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.702264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.702281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.711640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.711782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.711799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.721199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.721359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.721375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.727 [2024-07-12 14:32:04.730799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.727 [2024-07-12 14:32:04.730943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.727 [2024-07-12 14:32:04.730961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.740584] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.740746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.740764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.750170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.750311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.750328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.759672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.759813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.759830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.769272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.769437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.769455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:27:12.986 [2024-07-12 14:32:04.778813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.778952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.778968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.788320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.788484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.788502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.797894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.798035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.798052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.807357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.807505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.807523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.817142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.817286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.817309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.826938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.827085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.827103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.836563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.836724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.836741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.846196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.846335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.846352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.855712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.855854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.855871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.865273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.865429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.865446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.874881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.875024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.875057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.884615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.884776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.884794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.894351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.894535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.903945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.904092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.904109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.913478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.913621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.913638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.923075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.923236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.986 [2024-07-12 14:32:04.923252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.932671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.932812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.942232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.942373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.942394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.951838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.951979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.961347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.961494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9913 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.961511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.970937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.986 [2024-07-12 14:32:04.971097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.986 [2024-07-12 14:32:04.971114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.986 [2024-07-12 14:32:04.980475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.987 [2024-07-12 14:32:04.980633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.987 [2024-07-12 14:32:04.980650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.987 [2024-07-12 14:32:04.990062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:12.987 [2024-07-12 14:32:04.990207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.987 [2024-07-12 14:32:04.990224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:04.999828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:04.999971] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:04.999988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.009392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.009533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.009550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.018940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.019098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.028527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.028669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.028686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.038122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.038263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.038280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.047704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.047845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.047862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.057278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.057428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.057444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.066850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.066993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.067016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.076372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 
00:27:13.246 [2024-07-12 14:32:05.076520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.076537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.085888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.086027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.086044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.095464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.095625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.095654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.105014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:13.246 [2024-07-12 14:32:05.105155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.246 [2024-07-12 14:32:05.105172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:13.246 [2024-07-12 14:32:05.114537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16ab4d0) with pdu=0x2000190feb58
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.863565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.873021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.873161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.873177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.882622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.882764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.882781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.892140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.892282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.892298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.901728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 
[2024-07-12 14:32:05.901873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.901894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.911217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.911374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.911396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.921028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.921187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.921205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.026 [2024-07-12 14:32:05.930764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.026 [2024-07-12 14:32:05.930926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.026 [2024-07-12 14:32:05.930944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.940417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.940580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.940597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.949970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.950112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.950129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.959529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.959689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.959706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.969128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.969271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.978662] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.978803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.978820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.988253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.988422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.988439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:05.997830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:05.997971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:05.997987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:06.007355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:06.007520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:06.007537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:27:14.027 [2024-07-12 14:32:06.016907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:06.017066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:06.017082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.027 [2024-07-12 14:32:06.026460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.027 [2024-07-12 14:32:06.026600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.027 [2024-07-12 14:32:06.026616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.036200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.036346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.036363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.045809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.045949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.045966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.055342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.055491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.055508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.065089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.065252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.065269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.074694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.074836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.074853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.084180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.084321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.084337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.093848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.093990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.094006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.103333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.103482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.103499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.112889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.113048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.113065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.122456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.122602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.122618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.131988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.132129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.286 [2024-07-12 14:32:06.132145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.286 [2024-07-12 14:32:06.141525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.286 [2024-07-12 14:32:06.141696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.141713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.151072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.151211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.151230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.160810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.160973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 
[2024-07-12 14:32:06.160990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.170390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.170549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.170566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.180198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.180357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.180375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.190076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.190239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.190257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.199768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.199910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10291 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.199929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.209333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.209504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.209522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.218851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.218993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.219010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.228429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.228592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.228610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.238000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.238147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.238163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.247508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.247651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.247668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.257066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.257224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.257241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.266632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.266792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.276164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.276307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.276324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.287 [2024-07-12 14:32:06.285725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.287 [2024-07-12 14:32:06.285868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.287 [2024-07-12 14:32:06.285884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.295444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 [2024-07-12 14:32:06.295616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.295633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.305127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 [2024-07-12 14:32:06.305285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.305302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.314682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 
[2024-07-12 14:32:06.314824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.314840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.324199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 [2024-07-12 14:32:06.324340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.324356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.333776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 [2024-07-12 14:32:06.333917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.333934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.343286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab4d0) with pdu=0x2000190feb58 00:27:14.546 [2024-07-12 14:32:06.343434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.546 [2024-07-12 14:32:06.343451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.546 [2024-07-12 14:32:06.352844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16ab4d0) with pdu=0x2000190feb58
00:27:14.546 [2024-07-12 14:32:06.353003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:14.546 [2024-07-12 14:32:06.353020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:14.546
00:27:14.546 Latency(us)
00:27:14.546 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:27:14.546 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:14.546 nvme0n1 : 2.00             26509.08  103.55  0.00    0.00  4820.15  4587.52  13791.05
00:27:14.546 ===================================================================================================================
00:27:14.546 Total :                    26509.08  103.55  0.00    0.00  4820.15  4587.52  13791.05
00:27:14.546 0
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:14.546 | .driver_specific
00:27:14.546 | .nvme_error
00:27:14.546 | .status_code
00:27:14.546 | .command_transient_transport_error'
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2692894
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2692894 ']'
00:27:14.546 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@952 -- # kill -0 2692894
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2692894
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2692894'
killing process with pid 2692894
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2692894
Received shutdown signal, test time was about 2.000000 seconds
00:27:14.807
00:27:14.807 Latency(us)
00:27:14.807 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:27:14.807 ===================================================================================================================
00:27:14.807 Total :                    0.00  0.00   0.00    0.00  0.00     0.00  0.00
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2692894
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:14.807 14:32:06
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2693552 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2693552 /var/tmp/bperf.sock 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2693552 ']' 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:14.807 14:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.066 [2024-07-12 14:32:06.820976] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:15.066 [2024-07-12 14:32:06.821025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693552 ] 00:27:15.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:15.066 Zero copy mechanism will not be used. 
00:27:15.066 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.066 [2024-07-12 14:32:06.875118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.066 [2024-07-12 14:32:06.943372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.633 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.633 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:15.633 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:15.633 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.892 14:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.150 nvme0n1 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:16.150 14:32:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.410 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:16.410 Zero copy mechanism will not be used. 00:27:16.410 Running I/O for 2 seconds... 00:27:16.410 [2024-07-12 14:32:08.182031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.182384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.182414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.189701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.190042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.190065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.196978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) 
with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.197315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.197336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.204295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.204655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.204677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.211102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.211455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.211476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.217778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.218110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.218134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.224319] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.224681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.224700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.232099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.232505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.239094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.239429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.239448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.246116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.246456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.246476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.252655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.252990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.253010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.259458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.259808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.259828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.266269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.266617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.266648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.273021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.273367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.273392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.279226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.279440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.279463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.286105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.286477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.286496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.292443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.292764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.292784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.297990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.298305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.298324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.304493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.304895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.304915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.310487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.310772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.310790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.316280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.316581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.316600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.321375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.321710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.410 [2024-07-12 14:32:08.321729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.326312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.326614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.326634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.332885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.333187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.333206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.338986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.339274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.339294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.344870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.345261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.345280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.410 [2024-07-12 14:32:08.350928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.410 [2024-07-12 14:32:08.351211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.410 [2024-07-12 14:32:08.351229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.356129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.356425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.356444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.361864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.362153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.362171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.367018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.367318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.367337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.372754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.373038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.373057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.377221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.377527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.377550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.381291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.381595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.386037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:16.411 [2024-07-12 14:32:08.386319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.386338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.390010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.390269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.390289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.393775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.394006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.397525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.397757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.397776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.401265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.401506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.405057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.405273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.405292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.409044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.409282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.411 [2024-07-12 14:32:08.412920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.411 [2024-07-12 14:32:08.413138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.411 [2024-07-12 14:32:08.413157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 
14:32:08.416891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.671 [2024-07-12 14:32:08.417118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.671 [2024-07-12 14:32:08.417137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 14:32:08.421324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.671 [2024-07-12 14:32:08.421549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.671 [2024-07-12 14:32:08.421568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 14:32:08.425847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.671 [2024-07-12 14:32:08.426067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.671 [2024-07-12 14:32:08.426087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 14:32:08.429925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.671 [2024-07-12 14:32:08.430159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.671 [2024-07-12 14:32:08.430178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 14:32:08.433919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.671 [2024-07-12 14:32:08.434154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.671 [2024-07-12 14:32:08.434172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.671 [2024-07-12 14:32:08.437992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.438211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.438231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.441958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.442183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.442203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.445933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.446155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.446179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.450316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.450543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.450563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.454510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.454735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.454754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.458576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.458799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.458818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.462653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.462885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.462904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.466616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.466852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.466871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.470628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.470880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.470899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.474472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.474704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.474723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.478695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.478925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.478944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.483281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.483509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.483529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.487502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.487723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.487742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.491937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.492176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.492195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.496193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.496427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.496446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.499953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.500191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.500210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.503792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.504008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.504027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.507926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.508261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.508281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.512999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.513354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.513374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.517370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.517639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.521334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.521576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.521596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.525188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.525440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.529148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:16.672 [2024-07-12 14:32:08.529388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.529407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.534197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.534514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.534533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.538912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.539173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.539192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.542902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.543141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.543160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.546898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.547118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.547138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.550759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.550978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.550997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.672 [2024-07-12 14:32:08.555964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.672 [2024-07-12 14:32:08.556216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.672 [2024-07-12 14:32:08.556239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.560929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.561098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.561115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 
14:32:08.565983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.566278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.566298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.570769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.571030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.571049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.576135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.576354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.576374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.580793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.581027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.581047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.585912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.586156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.586174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.591340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.591572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.591591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.596368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.596633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.596652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.601120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.601359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.601386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.606413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.606664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.606684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.611108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.611340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.611359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.616144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.616388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.620925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.621195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.621215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.625853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.626086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.630839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.631043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.631062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.635840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.636046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.636065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.640783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.641155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.641174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.645488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.645702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.645721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.649503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.649722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.649741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.653341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.653575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.657276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.657519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.657538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.661156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.661360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.661386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.665101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.665293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.665317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.669007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.669211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.669229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.672893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.673113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.673132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.673 [2024-07-12 14:32:08.677350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.673 [2024-07-12 14:32:08.677578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.673 [2024-07-12 14:32:08.677602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.682140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.682394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.682414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.686226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.686475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.686493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.690247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:16.934 [2024-07-12 14:32:08.690455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.690474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.694193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.694409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.694428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.698037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.698248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.702094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.702291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.702309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.706892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.707096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.707115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.711303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.711510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.711529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.715205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.715414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.715432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.719056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.719276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.719294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 
14:32:08.722978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.723199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.723218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.726915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.727132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.727152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.730803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.731026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.731045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.734813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.735035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.735054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.738665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.738889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.738909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.742548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.934 [2024-07-12 14:32:08.742758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.934 [2024-07-12 14:32:08.742777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.934 [2024-07-12 14:32:08.746642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.746846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.746868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.751552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.751772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.755653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.755846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.755870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.760094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.760309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.760327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.764415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.764675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.769064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.769373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.769398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.774369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.774620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.774639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.779744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.780004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.780023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.785324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.785495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.791474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.791588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.791606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.797131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.797316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.797334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.803509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.803640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.803659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.809591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.809778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.809796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.816106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.816247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.816264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.822397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.822511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.822528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.828751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.828929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.828947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.835260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.835405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.835423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.841505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.841649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.841667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.848091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.848486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.848506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.854332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.854520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.854538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.859824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.859977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.859994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.865203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:16.935 [2024-07-12 14:32:08.865292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.865309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.870331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.870443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.870460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.874870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.874943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.874961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.879250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.879385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.879403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.884030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.884121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.884138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.889249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.889318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.889343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.894252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.894323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.894341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 14:32:08.900145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.935 [2024-07-12 14:32:08.900249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.935 [2024-07-12 14:32:08.900268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.935 [2024-07-12 
14:32:08.905441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.905589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.905606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.910761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.910941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.910959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.916604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.916711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.916729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.921867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.921998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.922016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.927208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.927342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.927360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.932409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.932563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.932581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.936 [2024-07-12 14:32:08.938157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:16.936 [2024-07-12 14:32:08.938306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.936 [2024-07-12 14:32:08.938324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.943552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.943738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.943756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.948918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.949064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.949083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.954020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.954319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.954338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.959635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.959810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.959827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.965102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.965223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.965241] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.970372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.970520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.970537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.975521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.975625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.975643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.981030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.981172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.981189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.986327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.986491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.986509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.992028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.992109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.992127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:08.997279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:08.997446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:08.997463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:09.003695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:09.003788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:09.003806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.196 [2024-07-12 14:32:09.009714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.196 [2024-07-12 14:32:09.009862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.196 [2024-07-12 14:32:09.009880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.016062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.016199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.016216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.021655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.021816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.021833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.026978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.027134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.027152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.032513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.032682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.032703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.037809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.037994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.038012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.042939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.043093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.043111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.048530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.048676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.048694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.053663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:17.197 [2024-07-12 14:32:09.053854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.053872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.058881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.059056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.059074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.063991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.064111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.064128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.069501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.069703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.069723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.074871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.075020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.080591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.080718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.080736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.086049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.086211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.086229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.091918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.092069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.092087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 
14:32:09.098051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.098189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.098207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.103959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.104143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.104161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.109180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.109364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.109387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.114284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.114438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.114455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.119406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.119520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.119538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.124586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.124674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.124692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.129846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.130010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.130028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.135191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.135320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.135338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.140699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.140878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.140896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.146328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.146468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.146486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.151514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.151663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.151681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.156875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.157017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.157034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.162328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.162504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.162522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.197 [2024-07-12 14:32:09.167510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.197 [2024-07-12 14:32:09.167601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.197 [2024-07-12 14:32:09.167630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.172918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.173055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.173077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.178101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.178241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.178259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.183407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.183575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.183594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.189205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.189363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.189389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.194580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.194721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.194741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.198 [2024-07-12 14:32:09.199839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.198 [2024-07-12 14:32:09.199932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.198 [2024-07-12 14:32:09.199951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.458 [2024-07-12 14:32:09.204985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.458 [2024-07-12 14:32:09.205125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.205143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.210551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.210703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.210721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.215937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.216106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.216124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.221066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.221183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.221201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.226483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.226627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.226644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.231830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.232006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.232023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.237194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.237390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.237408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.242605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:17.459 [2024-07-12 14:32:09.242721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.242738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.247853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.248031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.248048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.253111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.253287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.253304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.258464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.258634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.258652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.263797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.263976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.263997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.269784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.269936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.269954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.275526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.275705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.275722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.281775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.281917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.281935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 
14:32:09.286625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.286695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.286713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.291361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.291471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.291489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.295341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.295464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.295481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.299126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.299211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.299229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.302900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.303001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.303018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.306653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.306742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.306760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.310609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.310714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.310732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.315646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.315699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.315717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.459 [2024-07-12 14:32:09.320098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.459 [2024-07-12 14:32:09.320179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.459 [2024-07-12 14:32:09.320197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.324109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.324179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.324197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.328010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.328071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.328089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.331924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.331976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.331994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.335817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.335880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.335898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.339659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.339714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.339732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.343567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.343678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.343695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.347492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.347565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.347583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.351321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.351394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.351413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.355271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.355361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.355385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.359655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.359745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.359762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.363421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.363501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.363519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.367206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.367295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.367314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.371130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.371234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.371252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.375554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.375652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.375673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.379418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.379496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.379514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.383214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.383293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.383312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.387105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.387165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.387199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.391290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.391360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.391384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.395833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:17.460 [2024-07-12 14:32:09.395960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.395978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.399805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.399857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.399875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.403680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.403738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.407417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.460 [2024-07-12 14:32:09.407542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.460 [2024-07-12 14:32:09.407559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.460 [2024-07-12 14:32:09.412017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.412148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.460 [2024-07-12 14:32:09.412166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.460 [2024-07-12 14:32:09.417193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.417306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.460 [2024-07-12 14:32:09.417324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.460 [2024-07-12 14:32:09.422298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.422567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.460 [2024-07-12 14:32:09.422586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.460 [2024-07-12 14:32:09.427658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.427821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.460 [2024-07-12 14:32:09.427840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.460 [2024-07-12 14:32:09.434195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.434360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.460 [2024-07-12 14:32:09.434383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.460 [2024-07-12 14:32:09.439599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.460 [2024-07-12 14:32:09.439754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.461 [2024-07-12 14:32:09.439771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.461 [2024-07-12 14:32:09.444812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.461 [2024-07-12 14:32:09.444942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.461 [2024-07-12 14:32:09.444960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.461 [2024-07-12 14:32:09.449995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.461 [2024-07-12 14:32:09.450152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.461 [2024-07-12 14:32:09.450170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.461 [2024-07-12 14:32:09.455185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.461 [2024-07-12 14:32:09.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.461 [2024-07-12 14:32:09.455382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.461 [2024-07-12 14:32:09.461029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.461 [2024-07-12 14:32:09.461194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.461 [2024-07-12 14:32:09.461212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.466092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.466253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.466270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.471277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.471450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.471469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.476615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.476782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.476800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.482480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.482622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.482640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.487983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.488114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.488132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.493277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.493445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.493463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.498649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.498804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.498821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.503669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.503771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.721 [2024-07-12 14:32:09.503792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.721 [2024-07-12 14:32:09.508856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.721 [2024-07-12 14:32:09.508979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.508997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.514300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.514434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.514453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.519641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.519796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.519814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.525189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.525324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.525342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.530255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.530409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.530427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.535303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.535472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.535490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.540146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.540241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.540260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.544895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.544981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.544999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.550134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.550299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.550317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.554946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.555070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.555087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.560301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.560360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.560383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.565385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.565503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.565520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.569661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.569754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.569772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.573670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.573765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.573783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.577581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.577666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.577684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.581969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.582062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.582080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.586856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.586985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.587006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.590926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.591008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.591027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.594801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.594863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.598638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.598708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.598726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.602778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.602847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.602865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.606685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.606813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.606831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.611244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.611418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.611436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.616320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.616420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.616437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.620297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.620431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.620449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.624170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.624302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.624320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.628108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.628231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.628249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.632216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.632347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.632365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.636114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.722 [2024-07-12 14:32:09.636227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.722 [2024-07-12 14:32:09.636245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.722 [2024-07-12 14:32:09.640012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.640132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.640150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.643928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.644013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.644031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.647833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.647952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.647970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.651639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.651757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.651774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.655928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.656114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.656132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.660902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.661069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.661087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.665352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.665483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.665501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.669280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.669390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.669408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.673097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.673206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.673224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.676981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.677093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.677110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.680832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.680935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.680953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.684726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.684839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.684857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.688605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.688700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.688718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.692994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.693094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.693116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.698166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.698255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.698273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.703311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.703464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.703483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.709122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.709311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.709329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.714424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.714555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.714573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.719307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.719428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.719447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.723 [2024-07-12 14:32:09.723848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.723 [2024-07-12 14:32:09.723986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.723 [2024-07-12 14:32:09.724004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.728417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.728513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.728530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.733100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.733206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.733224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.737187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.737257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.737275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.741102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.741161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.741179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.744972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.745043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.745060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.748862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.748918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.748936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.752846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.752899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.752916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.756751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.756808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.756826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.760916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.760991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.761009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.764731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.764799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.764817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.768530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.768625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.768642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.772256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.772334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.772352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.775993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.776078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.776095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.779731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.779813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.779831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.984 [2024-07-12 14:32:09.783505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90
00:27:17.984 [2024-07-12 14:32:09.783596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.984 [2024-07-12 14:32:09.783614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.787206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.787259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.787277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.791433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.791554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.791572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.796055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.796132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.796150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.800348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.800407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.800426] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.804375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.804449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.804470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.808386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.808465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.812357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.812432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.812451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.816250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.816325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.816345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.820214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.820270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.820288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.824113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.824186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.824209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.828107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.828175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.984 [2024-07-12 14:32:09.828194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.984 [2024-07-12 14:32:09.832046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.984 [2024-07-12 14:32:09.832170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.832188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.836156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.836258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.836277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.840074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.840134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.840152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.844011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.844083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.844101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.848173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.848300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.848317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.853007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.853172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.853190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.857956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.858123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.858140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.863350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.863461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.863479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.869362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:17.985 [2024-07-12 14:32:09.869529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.869547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.874970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.875112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.875130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.881353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.881461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.881479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.887763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.887935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.893870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.894063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.894089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.900461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.900587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.900605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.907111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.907268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.907286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.913383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.913497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.913515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 
14:32:09.919547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.919691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.919709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.925979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.926156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.926173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.932437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.932544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.932562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.938722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.938879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.938901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.944755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.944862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.944880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.949990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.950080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.950098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.954631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.954727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.954746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.958994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.959116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.959134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.963037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.963115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.967234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.967342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.967359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.972135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.972281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.972298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.977241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.977361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.977386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.982637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.982898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.985 [2024-07-12 14:32:09.982917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.985 [2024-07-12 14:32:09.988190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:17.985 [2024-07-12 14:32:09.988335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.986 [2024-07-12 14:32:09.988353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:09.993116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:09.993201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:09.993220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:09.997218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:09.997327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:09.997345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.001341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.001482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.001500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.005425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.005523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.005542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.010073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.010172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.010191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.015271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.015352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.015372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.019221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.019319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.019341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.024107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.024283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.024302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.028797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.028883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.028901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.032708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.032827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.032845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.245 [2024-07-12 14:32:10.036633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.245 [2024-07-12 14:32:10.036765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.245 [2024-07-12 14:32:10.036783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.040517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.040628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.040646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.044531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.044665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.048553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 
00:27:18.246 [2024-07-12 14:32:10.048658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.048676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.052478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.052598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.052615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.056352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.056429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.056448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.060229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.060350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.060368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.064705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.064852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.064870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.069735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.069823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.069841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.074399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.074489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.074507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.080288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.080383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.080404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 
14:32:10.084985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.085078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.085097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.089159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.089232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.089251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.093084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.093168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.093187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.097032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.097123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.097141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.101125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.101186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.101204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.106160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.106213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.106231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.110385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.110511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.110529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.114353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.114419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.114438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.118264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.118326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.118345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.122348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.122436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.122453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.127122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.127209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.131502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.131617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.135459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.135514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.135532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.139323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.139393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.139411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.143243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.143354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.143372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.147904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.147955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.147973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.152483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.152547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.152564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.156498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.156564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.156582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.160498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.246 [2024-07-12 14:32:10.160572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.246 [2024-07-12 14:32:10.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.246 [2024-07-12 14:32:10.164336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.247 [2024-07-12 14:32:10.164422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.247 [2024-07-12 14:32:10.164441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.247 [2024-07-12 14:32:10.168357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.247 [2024-07-12 14:32:10.168469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.247 [2024-07-12 14:32:10.168487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.247 [2024-07-12 14:32:10.172274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.247 [2024-07-12 14:32:10.172353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.247 [2024-07-12 14:32:10.172371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.247 [2024-07-12 14:32:10.176123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16ab810) with pdu=0x2000190fef90 00:27:18.247 [2024-07-12 14:32:10.176191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.247 [2024-07-12 14:32:10.176209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.247 00:27:18.247 Latency(us) 00:27:18.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.247 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO 
size: 131072) 00:27:18.247 nvme0n1 : 2.00 6421.80 802.73 0.00 0.00 2487.74 1495.93 8320.22 00:27:18.247 =================================================================================================================== 00:27:18.247 Total : 6421.80 802.73 0.00 0.00 2487.74 1495.93 8320.22 00:27:18.247 0 00:27:18.247 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:18.247 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:18.247 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:18.247 | .driver_specific 00:27:18.247 | .nvme_error 00:27:18.247 | .status_code 00:27:18.247 | .command_transient_transport_error' 00:27:18.247 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 414 > 0 )) 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2693552 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2693552 ']' 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2693552 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2693552 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2693552' 00:27:18.506 killing process with pid 2693552 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2693552 00:27:18.506 Received shutdown signal, test time was about 2.000000 seconds 00:27:18.506 00:27:18.506 Latency(us) 00:27:18.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.506 =================================================================================================================== 00:27:18.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.506 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2693552 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2691431 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2691431 ']' 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2691431 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691431 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691431' 00:27:18.766 killing process with pid 2691431 00:27:18.766 
14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2691431 00:27:18.766 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2691431 00:27:19.053 00:27:19.053 real 0m16.763s 00:27:19.053 user 0m32.025s 00:27:19.053 sys 0m4.553s 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.053 ************************************ 00:27:19.053 END TEST nvmf_digest_error 00:27:19.053 ************************************ 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.053 rmmod nvme_tcp 00:27:19.053 rmmod nvme_fabrics 00:27:19.053 rmmod nvme_keyring 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2691431 ']' 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2691431 00:27:19.053 14:32:10 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2691431 ']' 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2691431 00:27:19.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2691431) - No such process 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2691431 is not found' 00:27:19.053 Process with pid 2691431 is not found 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.053 14:32:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.587 14:32:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.587 00:27:21.588 real 0m40.564s 00:27:21.588 user 1m5.371s 00:27:21.588 sys 0m12.725s 00:27:21.588 14:32:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.588 14:32:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.588 ************************************ 00:27:21.588 END TEST nvmf_digest 00:27:21.588 ************************************ 00:27:21.588 14:32:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:21.588 14:32:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:27:21.588 14:32:13 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 
1 ]] 00:27:21.588 14:32:13 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:27:21.588 14:32:13 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:21.588 14:32:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:21.588 14:32:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.588 14:32:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.588 ************************************ 00:27:21.588 START TEST nvmf_bdevperf 00:27:21.588 ************************************ 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:21.588 * Looking for test storage... 00:27:21.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- 
# NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:27:21.588 14:32:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.861 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.862 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:26.862 Found net devices under 0000:86:00.0: cvl_0_0 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:26.862 Found net devices under 0000:86:00.1: cvl_0_1 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:26.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms
00:27:26.862 
00:27:26.862 --- 10.0.0.2 ping statistics ---
00:27:26.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.862 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:26.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:26.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:27:26.862 
00:27:26.862 --- 10.0.0.1 ping statistics ---
00:27:26.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.862 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@10 -- # set +x 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2697758 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2697758 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2697758 ']' 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.862 14:32:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.862 [2024-07-12 14:32:18.705946] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:26.862 [2024-07-12 14:32:18.705989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.862 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.862 [2024-07-12 14:32:18.764036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.862 [2024-07-12 14:32:18.836003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:26.862 [2024-07-12 14:32:18.836044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.862 [2024-07-12 14:32:18.836051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.862 [2024-07-12 14:32:18.836057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.862 [2024-07-12 14:32:18.836061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.862 [2024-07-12 14:32:18.836163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.862 [2024-07-12 14:32:18.836228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.862 [2024-07-12 14:32:18.836229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 [2024-07-12 14:32:19.548507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 Malloc0 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 [2024-07-12 14:32:19.612703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:27.800 { 00:27:27.800 "params": { 00:27:27.800 "name": "Nvme$subsystem", 00:27:27.800 "trtype": "$TEST_TRANSPORT", 00:27:27.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.800 "adrfam": "ipv4", 00:27:27.800 "trsvcid": "$NVMF_PORT", 00:27:27.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.800 "hdgst": ${hdgst:-false}, 00:27:27.800 "ddgst": ${ddgst:-false} 00:27:27.800 }, 00:27:27.800 "method": "bdev_nvme_attach_controller" 00:27:27.800 } 00:27:27.800 EOF 00:27:27.800 )") 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:27.800 14:32:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:27.800 "params": { 00:27:27.800 "name": "Nvme1", 00:27:27.800 "trtype": "tcp", 00:27:27.800 "traddr": "10.0.0.2", 00:27:27.800 "adrfam": "ipv4", 00:27:27.800 "trsvcid": "4420", 00:27:27.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.800 "hdgst": false, 00:27:27.800 "ddgst": false 00:27:27.800 }, 00:27:27.800 "method": "bdev_nvme_attach_controller" 00:27:27.800 }' 00:27:27.800 [2024-07-12 14:32:19.664782] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:27.800 [2024-07-12 14:32:19.664824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697805 ] 00:27:27.800 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.800 [2024-07-12 14:32:19.720174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.800 [2024-07-12 14:32:19.794335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.368 Running I/O for 1 seconds... 
00:27:29.304 
00:27:29.304 Latency(us)
00:27:29.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:29.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:29.304 Verification LBA range: start 0x0 length 0x4000
00:27:29.304 Nvme1n1 : 1.01 11179.51 43.67 0.00 0.00 11404.29 2393.49 15272.74
00:27:29.304 ===================================================================================================================
00:27:29.304 Total : 11179.51 43.67 0.00 0.00 11404.29 2393.49 15272.74
00:27:29.562 14:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2698143
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:29.563 {
00:27:29.563 "params": {
00:27:29.563 "name": "Nvme$subsystem",
00:27:29.563 "trtype": "$TEST_TRANSPORT",
00:27:29.563 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:29.563 "adrfam": "ipv4",
00:27:29.563 "trsvcid": "$NVMF_PORT",
00:27:29.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:29.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:29.563 "hdgst": ${hdgst:-false},
00:27:29.563 "ddgst": ${ddgst:-false}
00:27:29.563 },
00:27:29.563 "method": "bdev_nvme_attach_controller"
00:27:29.563 }
00:27:29.563 EOF
00:27:29.563 )")
00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf --
nvmf/common.sh@554 -- # cat 00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:29.563 14:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:29.563 "params": { 00:27:29.563 "name": "Nvme1", 00:27:29.563 "trtype": "tcp", 00:27:29.563 "traddr": "10.0.0.2", 00:27:29.563 "adrfam": "ipv4", 00:27:29.563 "trsvcid": "4420", 00:27:29.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.563 "hdgst": false, 00:27:29.563 "ddgst": false 00:27:29.563 }, 00:27:29.563 "method": "bdev_nvme_attach_controller" 00:27:29.563 }' 00:27:29.563 [2024-07-12 14:32:21.354667] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:29.563 [2024-07-12 14:32:21.354715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698143 ] 00:27:29.563 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.563 [2024-07-12 14:32:21.409252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.563 [2024-07-12 14:32:21.482500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.821 Running I/O for 15 seconds... 
00:27:32.353 14:32:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2697758 00:27:32.353 14:32:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:32.353 [2024-07-12 14:32:24.331638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.353 [2024-07-12 14:32:24.331871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.353 [2024-07-12 14:32:24.331877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 
[2024-07-12 14:32:24.331939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.331985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.331992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 
14:32:24.332407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.354 [2024-07-12 14:32:24.332413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.354 [2024-07-12 14:32:24.332423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332496] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 
[2024-07-12 14:32:24.332890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.332985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.355 [2024-07-12 14:32:24.332994] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.355 [2024-07-12 14:32:24.333002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 
14:32:24.333349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.356 [2024-07-12 14:32:24.333479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.356 [2024-07-12 14:32:24.333495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.356 [2024-07-12 14:32:24.333510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.356 [2024-07-12 14:32:24.333519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:32.356 [2024-07-12 14:32:24.333526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.357 [2024-07-12 14:32:24.333683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.357 [2024-07-12 14:32:24.333690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.357 [2024-07-12 14:32:24.333698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.357 [2024-07-12 14:32:24.333704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.357 [2024-07-12 14:32:24.333715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.357 [2024-07-12 14:32:24.333721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.357 [2024-07-12 14:32:24.333730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d86c70 is same with the state(5) to be set
00:27:32.357 [2024-07-12 14:32:24.333738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:32.357 [2024-07-12 14:32:24.333743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:32.357 [2024-07-12 14:32:24.333749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95008 len:8 PRP1 0x0 PRP2 0x0
00:27:32.357 [2024-07-12 14:32:24.333757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.357 [2024-07-12 14:32:24.333798] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d86c70 was disconnected and freed. reset controller.
00:27:32.357 [2024-07-12 14:32:24.336658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.357 [2024-07-12 14:32:24.336706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.357 [2024-07-12 14:32:24.337321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.357 [2024-07-12 14:32:24.337336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.357 [2024-07-12 14:32:24.337344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.357 [2024-07-12 14:32:24.337534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.357 [2024-07-12 14:32:24.337719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.357 [2024-07-12 14:32:24.337728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.357 [2024-07-12 14:32:24.337735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.357 [2024-07-12 14:32:24.340669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.357 [2024-07-12 14:32:24.350179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.357 [2024-07-12 14:32:24.350643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.357 [2024-07-12 14:32:24.350660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.357 [2024-07-12 14:32:24.350668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.357 [2024-07-12 14:32:24.350864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.357 [2024-07-12 14:32:24.351058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.357 [2024-07-12 14:32:24.351067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.357 [2024-07-12 14:32:24.351074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.357 [2024-07-12 14:32:24.354058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.363262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.363691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.363707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.363715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.363897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.364075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.364083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.364090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.366947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.376346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.376724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.376740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.376747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.376924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.377101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.377109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.377115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.379963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.389537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.389917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.389959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.389981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.390481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.390660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.390668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.390674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.393511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.402588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.403025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.403041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.403048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.403226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.403412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.403420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.403426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.406264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.415552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.415840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.415855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.415862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.416034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.416206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.416213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.416219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.418944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.428358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.428712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.428727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.428733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.428896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.429058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.429065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.429071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.431774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.441287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.441733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.441774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.441796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.442374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.442584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.442591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.442597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.445281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.454186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.454643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.454696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.454718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.455297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.455766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.455774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.455780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.458476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.467057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.467497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.467540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.467561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.468139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.468734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.468761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.468767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.471543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.479989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.480450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.480493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.480514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.480955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.481119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.481126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.481132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.483833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.492787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.493191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.493206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.493213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.493382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.493572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.493580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.493586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.496270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.505673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.506129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.506145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.506151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.506324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.506502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.506510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.506516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.509203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.518570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.519038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.519079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.519100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.519645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.519824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.519832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.519838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.522645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.531464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.531889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.531903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.531910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.532072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.532234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.532242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.532247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.534948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.544365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.616 [2024-07-12 14:32:24.544817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.616 [2024-07-12 14:32:24.544833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.616 [2024-07-12 14:32:24.544839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.616 [2024-07-12 14:32:24.545011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.616 [2024-07-12 14:32:24.545183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.616 [2024-07-12 14:32:24.545191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.616 [2024-07-12 14:32:24.545197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.616 [2024-07-12 14:32:24.547888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.616 [2024-07-12 14:32:24.557298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.557747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.557763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.557769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.557940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.558112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.617 [2024-07-12 14:32:24.558119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.617 [2024-07-12 14:32:24.558125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.617 [2024-07-12 14:32:24.560827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.617 [2024-07-12 14:32:24.570235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.570701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.570742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.570763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.571157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.571330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.617 [2024-07-12 14:32:24.571338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.617 [2024-07-12 14:32:24.571344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.617 [2024-07-12 14:32:24.574134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.617 [2024-07-12 14:32:24.583047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.583503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.583519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.583540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.583712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.583884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.617 [2024-07-12 14:32:24.583892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.617 [2024-07-12 14:32:24.583898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.617 [2024-07-12 14:32:24.586737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.617 [2024-07-12 14:32:24.596111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.596456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.596472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.596479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.596656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.596833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.617 [2024-07-12 14:32:24.596841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.617 [2024-07-12 14:32:24.596847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.617 [2024-07-12 14:32:24.599682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.617 [2024-07-12 14:32:24.609138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.609465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.609481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.609487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.609660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.609831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.617 [2024-07-12 14:32:24.609839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.617 [2024-07-12 14:32:24.609845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.617 [2024-07-12 14:32:24.612597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.617 [2024-07-12 14:32:24.622140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.617 [2024-07-12 14:32:24.622598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.617 [2024-07-12 14:32:24.622613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.617 [2024-07-12 14:32:24.622620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.617 [2024-07-12 14:32:24.622797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.617 [2024-07-12 14:32:24.622982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.875 [2024-07-12 14:32:24.622994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.875 [2024-07-12 14:32:24.623001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.875 [2024-07-12 14:32:24.625790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.875 [2024-07-12 14:32:24.634996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.875 [2024-07-12 14:32:24.635418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-07-12 14:32:24.635433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-07-12 14:32:24.635440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.875 [2024-07-12 14:32:24.635602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.875 [2024-07-12 14:32:24.635765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.875 [2024-07-12 14:32:24.635772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.875 [2024-07-12 14:32:24.635778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.875 [2024-07-12 14:32:24.638477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.875 [2024-07-12 14:32:24.647884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:32.875 [2024-07-12 14:32:24.648360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.875 [2024-07-12 14:32:24.648414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:32.875 [2024-07-12 14:32:24.648436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:32.875 [2024-07-12 14:32:24.648974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:32.875 [2024-07-12 14:32:24.649146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.875 [2024-07-12 14:32:24.649154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.875 [2024-07-12 14:32:24.649160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.875 [2024-07-12 14:32:24.651909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:32.875 [2024-07-12 14:32:24.660716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.875 [2024-07-12 14:32:24.661079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.875 [2024-07-12 14:32:24.661094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.875 [2024-07-12 14:32:24.661101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.875 [2024-07-12 14:32:24.661272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.875 [2024-07-12 14:32:24.661449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.875 [2024-07-12 14:32:24.661458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.875 [2024-07-12 14:32:24.661464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.875 [2024-07-12 14:32:24.664151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.875 [2024-07-12 14:32:24.673635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.875 [2024-07-12 14:32:24.674067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.875 [2024-07-12 14:32:24.674082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.875 [2024-07-12 14:32:24.674088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.875 [2024-07-12 14:32:24.674260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.875 [2024-07-12 14:32:24.674437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.875 [2024-07-12 14:32:24.674446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.875 [2024-07-12 14:32:24.674451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.875 [2024-07-12 14:32:24.677134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.875 [2024-07-12 14:32:24.686520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.875 [2024-07-12 14:32:24.686868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.875 [2024-07-12 14:32:24.686884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.875 [2024-07-12 14:32:24.686890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.875 [2024-07-12 14:32:24.687062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.875 [2024-07-12 14:32:24.687234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.687242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.687248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.689936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.699401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.699744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.699759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.699766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.699938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.700109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.700117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.700123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.702823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.712332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.712807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.712849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.712870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.713349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.713525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.713534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.713540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.716221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.725228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.725649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.725664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.725671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.725843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.726015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.726023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.726029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.728732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.738144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.738599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.738642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.738663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.739180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.739342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.739350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.739355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.742059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.750975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.751410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.751451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.751473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.752051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.752362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.752370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.752384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.755090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.763864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.764305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.764345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.764366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.764957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.765459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.765467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.765474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.768158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.776795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.777222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.777237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.777244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.777422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.777593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.777601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.777607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.780291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.789654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.790122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.790163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.790184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.790663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.790836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.790844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.790850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.793537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.802679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.803156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.803196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.803217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.803745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.803919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.803927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.803932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.806618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.815758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.816199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.816247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.816268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.816865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.817455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.817480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.817499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.820317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.828610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.829064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.829079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.829085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.829256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.829434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.829442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.829448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.832177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.841719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.842166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.842181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.842188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.842365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.842553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.842562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.842568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.845364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.854613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.854955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.854969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.854976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.855147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.855320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.855328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.855333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.858021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.867419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.867867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.867883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.867889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.868060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.868237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.868245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.868251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:32.876 [2024-07-12 14:32:24.870944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:32.876 [2024-07-12 14:32:24.880509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:32.876 [2024-07-12 14:32:24.880935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.876 [2024-07-12 14:32:24.880950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:32.876 [2024-07-12 14:32:24.880956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:32.876 [2024-07-12 14:32:24.881128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:32.876 [2024-07-12 14:32:24.881300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:32.876 [2024-07-12 14:32:24.881307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:32.876 [2024-07-12 14:32:24.881313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.135 [2024-07-12 14:32:24.884132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.135 [2024-07-12 14:32:24.893423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.135 [2024-07-12 14:32:24.893842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.135 [2024-07-12 14:32:24.893858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.135 [2024-07-12 14:32:24.893864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.135 [2024-07-12 14:32:24.894036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.135 [2024-07-12 14:32:24.894211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.135 [2024-07-12 14:32:24.894219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.135 [2024-07-12 14:32:24.894225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.135 [2024-07-12 14:32:24.896918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.135 [2024-07-12 14:32:24.906323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.135 [2024-07-12 14:32:24.906753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.135 [2024-07-12 14:32:24.906768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.135 [2024-07-12 14:32:24.906774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.135 [2024-07-12 14:32:24.906936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.135 [2024-07-12 14:32:24.907098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.135 [2024-07-12 14:32:24.907106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.135 [2024-07-12 14:32:24.907111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.135 [2024-07-12 14:32:24.909811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.919218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.919561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.919576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.919583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.919755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.919927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.919935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.919941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.922678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.932214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.932659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.932674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.932683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.932856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.933032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.933039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.933045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.935748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.945003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.945458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.945473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.945480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.945652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.945828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.945836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.945842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.948554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.957835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.958275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.958316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.958337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.958836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.959010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.959018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.959024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.961749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.970768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.971196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.971211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.971217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.971387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.971577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.971585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.971591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.974328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.983685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.984109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.984123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.984130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.984293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.984482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.984491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.984497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:24.987183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:24.996543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:24.996972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:24.996987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:24.996994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:24.997166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:24.997339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:24.997346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:24.997352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:25.000046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:25.009482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:25.009913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:25.009955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:25.009977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:25.010568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:25.011107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:25.011115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:25.011121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:25.013810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:25.022317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:25.022750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:25.022765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:25.022771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:25.022958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:25.023132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:25.023140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:25.023146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:25.025887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:25.035135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:25.035561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:25.035576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:25.035582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:25.035746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:25.035909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:25.035916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:25.035922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:25.038620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.136 [2024-07-12 14:32:25.048016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.136 [2024-07-12 14:32:25.048442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.136 [2024-07-12 14:32:25.048457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.136 [2024-07-12 14:32:25.048463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.136 [2024-07-12 14:32:25.048625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.136 [2024-07-12 14:32:25.048787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.136 [2024-07-12 14:32:25.048795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.136 [2024-07-12 14:32:25.048801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.136 [2024-07-12 14:32:25.051472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.060833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.061275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.061321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.061350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.061921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.062099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.062107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.062113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.064822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.073767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.074146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.074161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.074180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.074343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.074534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.074543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.074548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.077231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.086797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.087238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.087279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.087300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.087894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.088457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.088466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.088474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.091201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.099793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.100189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.100205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.100212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.100394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.100571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.100582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.100588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.103429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.112919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.113305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.113345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.113367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.113956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.114467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.114475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.114481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.117247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.125952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.126326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.126366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.126406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.126988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.127453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.127461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.127467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.130231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.137 [2024-07-12 14:32:25.138795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.137 [2024-07-12 14:32:25.139170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.137 [2024-07-12 14:32:25.139186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.137 [2024-07-12 14:32:25.139193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.137 [2024-07-12 14:32:25.139369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.137 [2024-07-12 14:32:25.139551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.137 [2024-07-12 14:32:25.139560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.137 [2024-07-12 14:32:25.139566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.137 [2024-07-12 14:32:25.142327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.151741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.152217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.152258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.152280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.152872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.153409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.153417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.153423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.156104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.164662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.165111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.165126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.165133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.165305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.165481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.165489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.165495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.168190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.177539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.178003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.178043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.178065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.178524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.178697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.178705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.178711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.181404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.190459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.190808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.190824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.190831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.191007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.191180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.191188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.191194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.193885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.203296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.203748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.203791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.203813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.204355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.204534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.204542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.204548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.207234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.216144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.399 [2024-07-12 14:32:25.216508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.399 [2024-07-12 14:32:25.216524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.399 [2024-07-12 14:32:25.216543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.399 [2024-07-12 14:32:25.216705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.399 [2024-07-12 14:32:25.216867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.399 [2024-07-12 14:32:25.216875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.399 [2024-07-12 14:32:25.216881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.399 [2024-07-12 14:32:25.219581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.399 [2024-07-12 14:32:25.228987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.400 [2024-07-12 14:32:25.229418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.400 [2024-07-12 14:32:25.229432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.400 [2024-07-12 14:32:25.229439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.400 [2024-07-12 14:32:25.229602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.400 [2024-07-12 14:32:25.229763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.400 [2024-07-12 14:32:25.229771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.400 [2024-07-12 14:32:25.229780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.400 [2024-07-12 14:32:25.232483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.400 [2024-07-12 14:32:25.241894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.400 [2024-07-12 14:32:25.242343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.400 [2024-07-12 14:32:25.242395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.400 [2024-07-12 14:32:25.242419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.400 [2024-07-12 14:32:25.242997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.400 [2024-07-12 14:32:25.243587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.400 [2024-07-12 14:32:25.243612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.400 [2024-07-12 14:32:25.243632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.400 [2024-07-12 14:32:25.246375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.400 [2024-07-12 14:32:25.254820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.400 [2024-07-12 14:32:25.255256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.400 [2024-07-12 14:32:25.255297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.400 [2024-07-12 14:32:25.255319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.400 [2024-07-12 14:32:25.255731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.400 [2024-07-12 14:32:25.255904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.400 [2024-07-12 14:32:25.255911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.400 [2024-07-12 14:32:25.255917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.400 [2024-07-12 14:32:25.258614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.400 [2024-07-12 14:32:25.267654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.400 [2024-07-12 14:32:25.268056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.400 [2024-07-12 14:32:25.268071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.400 [2024-07-12 14:32:25.268078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.400 [2024-07-12 14:32:25.268240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.400 [2024-07-12 14:32:25.268423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.400 [2024-07-12 14:32:25.268432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.400 [2024-07-12 14:32:25.268438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.400 [2024-07-12 14:32:25.271118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.400 [2024-07-12 14:32:25.280558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.280966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.280984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.280991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.281153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.281315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.281322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.281328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.284031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.293419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.293826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.293867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.293889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.294481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.294908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.294916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.294922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.297572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.306203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.306610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.306626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.306633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.306804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.306976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.306983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.306989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.309690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.319058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.319500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.319515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.319521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.319683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.319849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.319856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.319862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.322562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.331893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.332305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.332319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.332325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.332502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.332675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.332683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.332689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.335442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.344875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.345290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.345304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.345311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.345490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.345664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.345672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.345678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.348532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.400 [2024-07-12 14:32:25.357876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.400 [2024-07-12 14:32:25.358304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.400 [2024-07-12 14:32:25.358319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.400 [2024-07-12 14:32:25.358326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.400 [2024-07-12 14:32:25.358513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.400 [2024-07-12 14:32:25.358690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.400 [2024-07-12 14:32:25.358698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.400 [2024-07-12 14:32:25.358704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.400 [2024-07-12 14:32:25.361545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.401 [2024-07-12 14:32:25.370876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.401 [2024-07-12 14:32:25.371295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.401 [2024-07-12 14:32:25.371310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.401 [2024-07-12 14:32:25.371317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.401 [2024-07-12 14:32:25.371500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.401 [2024-07-12 14:32:25.371686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.401 [2024-07-12 14:32:25.371693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.401 [2024-07-12 14:32:25.371699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.401 [2024-07-12 14:32:25.374467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.401 [2024-07-12 14:32:25.383933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.401 [2024-07-12 14:32:25.384342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.401 [2024-07-12 14:32:25.384357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.401 [2024-07-12 14:32:25.384364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.401 [2024-07-12 14:32:25.384573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.401 [2024-07-12 14:32:25.384748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.401 [2024-07-12 14:32:25.384756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.401 [2024-07-12 14:32:25.384761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.401 [2024-07-12 14:32:25.387614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.401 [2024-07-12 14:32:25.396998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.401 [2024-07-12 14:32:25.397438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.401 [2024-07-12 14:32:25.397454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.401 [2024-07-12 14:32:25.397461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.401 [2024-07-12 14:32:25.397639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.401 [2024-07-12 14:32:25.397817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.401 [2024-07-12 14:32:25.397825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.401 [2024-07-12 14:32:25.397831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.401 [2024-07-12 14:32:25.400671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.410091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.410505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.410521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.410531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.410709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.410886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.410894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.410900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.413745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.423139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.423555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.423570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.423577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.423755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.423933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.423942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.423948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.426789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.436338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.436734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.436750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.436757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.436933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.437111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.437120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.437126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.439966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.449587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.449986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.450001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.450008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.450186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.450363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.450374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.450386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.453219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.462779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.463196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.463211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.463218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.463401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.463584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.463592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.463598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.466435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.475838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.476291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.476333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.476355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.476950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.661 [2024-07-12 14:32:25.477129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.661 [2024-07-12 14:32:25.477137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.661 [2024-07-12 14:32:25.477143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.661 [2024-07-12 14:32:25.479989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.661 [2024-07-12 14:32:25.489060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.661 [2024-07-12 14:32:25.489539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.661 [2024-07-12 14:32:25.489555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.661 [2024-07-12 14:32:25.489562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.661 [2024-07-12 14:32:25.489739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.489915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.489923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.489929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.492778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.502048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.502485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.502501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.502509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.502681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.502854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.502862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.502868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.505674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.515174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.515631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.515685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.515706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.516257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.516453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.516461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.516468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.519191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.528170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.528601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.528616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.528622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.528793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.528966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.528973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.528979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.531762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.541166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.541553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.541569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.541579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.541757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.541921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.541929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.541934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.544710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.554127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.554554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.554570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.554576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.554739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.554901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.554909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.554914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.557680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.567052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.662 [2024-07-12 14:32:25.567525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.662 [2024-07-12 14:32:25.567567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.662 [2024-07-12 14:32:25.567588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.662 [2024-07-12 14:32:25.568164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.662 [2024-07-12 14:32:25.568328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.662 [2024-07-12 14:32:25.568336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.662 [2024-07-12 14:32:25.568342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.662 [2024-07-12 14:32:25.571112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.662 [2024-07-12 14:32:25.580069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.662 [2024-07-12 14:32:25.580436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.662 [2024-07-12 14:32:25.580478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.662 [2024-07-12 14:32:25.580499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.662 [2024-07-12 14:32:25.581078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.662 [2024-07-12 14:32:25.581518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.662 [2024-07-12 14:32:25.581527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.662 [2024-07-12 14:32:25.581537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.662 [2024-07-12 14:32:25.584218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.662 [2024-07-12 14:32:25.592923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.662 [2024-07-12 14:32:25.593283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.662 [2024-07-12 14:32:25.593298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.662 [2024-07-12 14:32:25.593305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.662 [2024-07-12 14:32:25.593483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.662 [2024-07-12 14:32:25.593655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.662 [2024-07-12 14:32:25.593663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.662 [2024-07-12 14:32:25.593669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.662 [2024-07-12 14:32:25.596361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.662 [2024-07-12 14:32:25.605948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.606372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.606393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.606401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.606572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.606743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.606751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.606757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.609496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.619065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.619483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.619525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.662 [2024-07-12 14:32:25.619547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.662 [2024-07-12 14:32:25.620125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.662 [2024-07-12 14:32:25.620672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.662 [2024-07-12 14:32:25.620681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.662 [2024-07-12 14:32:25.620687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.662 [2024-07-12 14:32:25.624535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.662 [2024-07-12 14:32:25.632752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.662 [2024-07-12 14:32:25.633207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.662 [2024-07-12 14:32:25.633247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.663 [2024-07-12 14:32:25.633268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.663 [2024-07-12 14:32:25.633796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.663 [2024-07-12 14:32:25.633970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.663 [2024-07-12 14:32:25.633978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.663 [2024-07-12 14:32:25.633984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.663 [2024-07-12 14:32:25.636780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.663 [2024-07-12 14:32:25.645782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.663 [2024-07-12 14:32:25.646218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.663 [2024-07-12 14:32:25.646233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.663 [2024-07-12 14:32:25.646240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.663 [2024-07-12 14:32:25.646417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.663 [2024-07-12 14:32:25.646590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.663 [2024-07-12 14:32:25.646597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.663 [2024-07-12 14:32:25.646603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.663 [2024-07-12 14:32:25.649334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.663 [2024-07-12 14:32:25.658759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.663 [2024-07-12 14:32:25.659236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.663 [2024-07-12 14:32:25.659276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.663 [2024-07-12 14:32:25.659298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.663 [2024-07-12 14:32:25.659720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.663 [2024-07-12 14:32:25.659893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.663 [2024-07-12 14:32:25.659901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.663 [2024-07-12 14:32:25.659907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.663 [2024-07-12 14:32:25.662617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.922 [2024-07-12 14:32:25.671767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.922 [2024-07-12 14:32:25.672130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.922 [2024-07-12 14:32:25.672145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.922 [2024-07-12 14:32:25.672152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.922 [2024-07-12 14:32:25.672327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.922 [2024-07-12 14:32:25.672507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.922 [2024-07-12 14:32:25.672515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.922 [2024-07-12 14:32:25.672521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.922 [2024-07-12 14:32:25.675342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.922 [2024-07-12 14:32:25.684754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.685201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.685216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.685222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.685400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.685573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.685581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.685586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.688313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.697741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.698184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.698199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.698206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.698383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.698556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.698564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.698570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.701323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.710686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.711103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.711118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.711125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.711297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.711474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.711482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.711491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.714245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.723637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.723972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.723987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.723994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.724166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.724337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.724345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.724351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.727056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.736568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.737026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.737040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.737047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.737219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.737399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.737407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.737414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.740099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.749510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.749939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.749955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.749962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.750133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.750306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.750314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.750320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.753018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.762447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.762814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.762833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.762839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.763011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.763183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.763191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.763197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.765948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.775392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.775747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.775763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.775769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.775947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.776133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.776141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.776147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.778841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.788358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.788728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.788742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.788749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.788911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.789073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.789081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.789087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.791816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.801338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.801827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.801868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.801890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.802300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.802484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.802492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.802499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.805247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.814287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.923 [2024-07-12 14:32:25.814663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.923 [2024-07-12 14:32:25.814678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.923 [2024-07-12 14:32:25.814685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.923 [2024-07-12 14:32:25.814857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.923 [2024-07-12 14:32:25.815028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.923 [2024-07-12 14:32:25.815036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.923 [2024-07-12 14:32:25.815042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.923 [2024-07-12 14:32:25.817823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.923 [2024-07-12 14:32:25.827360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.827817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.827858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.827880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.828471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.829016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.829024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.829030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.831715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.840179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.840581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.840596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.840603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.840774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.840945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.840953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.840959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.843650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.853048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.853478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.853494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.853500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.853672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.853845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.853852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.853858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.856569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.865981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.866413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.866429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.866436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.866613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.866793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.866802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.866808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.869646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.879144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.879534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.879549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.879556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.879740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.879913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.879921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.879927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.882680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.892044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.924 [2024-07-12 14:32:25.892478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.924 [2024-07-12 14:32:25.892518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:33.924 [2024-07-12 14:32:25.892547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:33.924 [2024-07-12 14:32:25.893023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:33.924 [2024-07-12 14:32:25.893186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.924 [2024-07-12 14:32:25.893194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.924 [2024-07-12 14:32:25.893199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.924 [2024-07-12 14:32:25.895896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.924 [2024-07-12 14:32:25.904837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.924 [2024-07-12 14:32:25.905270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.924 [2024-07-12 14:32:25.905311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.924 [2024-07-12 14:32:25.905332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.924 [2024-07-12 14:32:25.905927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.924 [2024-07-12 14:32:25.906352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.924 [2024-07-12 14:32:25.906360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.924 [2024-07-12 14:32:25.906366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.924 [2024-07-12 14:32:25.909049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.924 [2024-07-12 14:32:25.917765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.924 [2024-07-12 14:32:25.918223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.924 [2024-07-12 14:32:25.918263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:33.924 [2024-07-12 14:32:25.918285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:33.924 [2024-07-12 14:32:25.918881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:33.924 [2024-07-12 14:32:25.919167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:33.924 [2024-07-12 14:32:25.919174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:33.924 [2024-07-12 14:32:25.919180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:33.924 [2024-07-12 14:32:25.921871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.930801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.931253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.931269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.931276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.931483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.931668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.931679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.931685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.934466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.943683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.944109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.944124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.944131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.944303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.944480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.944488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.944494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.947176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.956489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.956926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.956967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.956989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.957417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.957590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.957598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.957604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.960287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.969346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.969752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.969767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.969773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.969945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.970116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.970124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.970130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.972829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.982233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.982660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.982674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.982681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.982853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.983025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.983033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.983039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.985742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:25.995139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:25.995574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:25.995616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:25.995637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:25.996146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:25.996318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:25.996325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:25.996331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:25.999015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:26.008079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:26.008512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:26.008555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:26.008576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:26.009157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:26.009424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:26.009432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:26.009438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:26.012119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:26.020894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:26.021325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:26.021366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:26.021402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:26.021989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:26.022416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:26.022424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:26.022430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:26.025207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:26.033755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:26.034193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:26.034234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.184 [2024-07-12 14:32:26.034256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.184 [2024-07-12 14:32:26.034777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.184 [2024-07-12 14:32:26.035031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.184 [2024-07-12 14:32:26.035042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.184 [2024-07-12 14:32:26.035051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.184 [2024-07-12 14:32:26.039117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.184 [2024-07-12 14:32:26.047111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.184 [2024-07-12 14:32:26.047437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.184 [2024-07-12 14:32:26.047452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.047459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.047625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.047792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.047799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.047805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.050545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.060080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.060486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.060502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.060509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.060684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.060851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.060858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.060867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.063569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.072972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.073405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.073447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.073469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.074047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.074422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.074430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.074436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.077160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.085782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.086189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.086230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.086250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.086666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.086838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.086846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.086852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.089569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.098634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.098965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.098981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.098988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.099160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.099336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.099344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.099350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.102045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.111565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.111953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.111968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.111975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.112147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.112318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.112326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.112332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.115028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.124536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.124896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.124911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.124918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.125094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.125271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.125279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.125285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.128126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.137607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.138095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.138137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.138159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.138664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.138837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.138845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.138851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.141654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.150592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.150948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.150963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.150970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.151148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.151321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.151329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.151335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.154148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.163532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.163959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.163974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.163981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.164153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.164325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.164333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.164339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.167082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.176399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.176742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.176757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.176764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.176935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.177107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.177115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.177121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.185 [2024-07-12 14:32:26.179819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.185 [2024-07-12 14:32:26.189440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.185 [2024-07-12 14:32:26.189778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.185 [2024-07-12 14:32:26.189795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.185 [2024-07-12 14:32:26.189802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.185 [2024-07-12 14:32:26.189974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.185 [2024-07-12 14:32:26.190146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.185 [2024-07-12 14:32:26.190154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.185 [2024-07-12 14:32:26.190163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.444 [2024-07-12 14:32:26.193010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.444 [2024-07-12 14:32:26.202277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.444 [2024-07-12 14:32:26.202705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.444 [2024-07-12 14:32:26.202721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.444 [2024-07-12 14:32:26.202727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.444 [2024-07-12 14:32:26.202899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.444 [2024-07-12 14:32:26.203072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.444 [2024-07-12 14:32:26.203080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.444 [2024-07-12 14:32:26.203086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.444 [2024-07-12 14:32:26.205784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.444 [2024-07-12 14:32:26.215187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.215535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.215551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.215557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.215729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.215902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.215909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.215915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.218627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.228113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.228540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.228556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.228563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.228733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.228906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.228914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.228920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.231625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.240991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.241394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.241441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.241463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.242036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.242199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.242207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.242212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.244911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.253853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.254252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.254267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.254273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.254451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.254624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.254631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.254637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.257319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.266790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.267217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.267232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.267239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.267417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.267590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.267597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.267603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.270284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.279739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.280166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.280206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.280228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.280821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.281055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.281063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.444 [2024-07-12 14:32:26.281069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.444 [2024-07-12 14:32:26.283753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.444 [2024-07-12 14:32:26.292628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.444 [2024-07-12 14:32:26.293049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.444 [2024-07-12 14:32:26.293064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.444 [2024-07-12 14:32:26.293071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.444 [2024-07-12 14:32:26.293243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.444 [2024-07-12 14:32:26.293426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.444 [2024-07-12 14:32:26.293435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.293441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.296123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.305486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.305910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.305925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.305932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.306104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.306276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.306284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.306289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.308979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.318316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.318751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.318793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.318814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.319313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.319491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.319499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.319505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.322193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.331285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.331651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.331692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.331714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.332229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.332409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.332416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.332423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.335169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.344156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.344518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.344534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.344541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.344713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.344886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.344894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.344900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.347613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.357121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.357571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.357612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.357634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.358127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.358398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.358409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.358419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.362493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.370650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.371115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.371131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.371141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.371314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.371492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.371501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.371507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.374272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.383582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.383977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.383993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.384000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.384177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.384355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.384362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.384368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.387208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.396714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.397176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.397217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.397239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.397721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.397895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.445 [2024-07-12 14:32:26.397903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.445 [2024-07-12 14:32:26.397908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.445 [2024-07-12 14:32:26.400658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.445 [2024-07-12 14:32:26.409815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.445 [2024-07-12 14:32:26.410200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.445 [2024-07-12 14:32:26.410241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.445 [2024-07-12 14:32:26.410263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.445 [2024-07-12 14:32:26.410854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.445 [2024-07-12 14:32:26.411446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.446 [2024-07-12 14:32:26.411478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.446 [2024-07-12 14:32:26.411498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.446 [2024-07-12 14:32:26.414224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.446 [2024-07-12 14:32:26.422611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.446 [2024-07-12 14:32:26.423041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-07-12 14:32:26.423055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.446 [2024-07-12 14:32:26.423062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.446 [2024-07-12 14:32:26.423224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.446 [2024-07-12 14:32:26.423392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.446 [2024-07-12 14:32:26.423400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.446 [2024-07-12 14:32:26.423422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.446 [2024-07-12 14:32:26.426108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.446 [2024-07-12 14:32:26.435463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.446 [2024-07-12 14:32:26.435886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-07-12 14:32:26.435901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.446 [2024-07-12 14:32:26.435907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.446 [2024-07-12 14:32:26.436070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.446 [2024-07-12 14:32:26.436231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.446 [2024-07-12 14:32:26.436239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.446 [2024-07-12 14:32:26.436244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.446 [2024-07-12 14:32:26.438944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.446 [2024-07-12 14:32:26.448382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.446 [2024-07-12 14:32:26.448728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.446 [2024-07-12 14:32:26.448743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.446 [2024-07-12 14:32:26.448750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.446 [2024-07-12 14:32:26.448921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.446 [2024-07-12 14:32:26.449093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.446 [2024-07-12 14:32:26.449101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.446 [2024-07-12 14:32:26.449107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.705 [2024-07-12 14:32:26.451923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.705 [2024-07-12 14:32:26.461229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.705 [2024-07-12 14:32:26.461573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-12 14:32:26.461588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.705 [2024-07-12 14:32:26.461595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.705 [2024-07-12 14:32:26.461766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.705 [2024-07-12 14:32:26.461938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.705 [2024-07-12 14:32:26.461945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.705 [2024-07-12 14:32:26.461951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.705 [2024-07-12 14:32:26.464661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.705 [2024-07-12 14:32:26.474231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.705 [2024-07-12 14:32:26.474668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-12 14:32:26.474684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.705 [2024-07-12 14:32:26.474691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.705 [2024-07-12 14:32:26.474863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.705 [2024-07-12 14:32:26.475034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.705 [2024-07-12 14:32:26.475042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.705 [2024-07-12 14:32:26.475048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.705 [2024-07-12 14:32:26.477820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.705 [2024-07-12 14:32:26.487060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.705 [2024-07-12 14:32:26.487375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.705 [2024-07-12 14:32:26.487395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.705 [2024-07-12 14:32:26.487401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.705 [2024-07-12 14:32:26.487588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.705 [2024-07-12 14:32:26.487761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.705 [2024-07-12 14:32:26.487769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.705 [2024-07-12 14:32:26.487775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.705 [2024-07-12 14:32:26.490493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.705 [2024-07-12 14:32:26.500092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.705 [2024-07-12 14:32:26.500508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-12 14:32:26.500552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.705 [2024-07-12 14:32:26.500574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.705 [2024-07-12 14:32:26.501161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.705 [2024-07-12 14:32:26.501718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.705 [2024-07-12 14:32:26.501727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.705 [2024-07-12 14:32:26.501733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.705 [2024-07-12 14:32:26.504451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.705 [2024-07-12 14:32:26.513030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.705 [2024-07-12 14:32:26.513436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-12 14:32:26.513452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.705 [2024-07-12 14:32:26.513459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.705 [2024-07-12 14:32:26.513630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.705 [2024-07-12 14:32:26.513803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.705 [2024-07-12 14:32:26.513811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.705 [2024-07-12 14:32:26.513817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.705 [2024-07-12 14:32:26.516529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.705 [2024-07-12 14:32:26.525887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.705 [2024-07-12 14:32:26.526332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-12 14:32:26.526400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.705 [2024-07-12 14:32:26.526423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.705 [2024-07-12 14:32:26.527001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.705 [2024-07-12 14:32:26.527591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.705 [2024-07-12 14:32:26.527617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.705 [2024-07-12 14:32:26.527637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.705 [2024-07-12 14:32:26.530425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.705 [2024-07-12 14:32:26.538885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.705 [2024-07-12 14:32:26.539319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-12 14:32:26.539334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.705 [2024-07-12 14:32:26.539340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.705 [2024-07-12 14:32:26.539519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.705 [2024-07-12 14:32:26.539691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.705 [2024-07-12 14:32:26.539699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.705 [2024-07-12 14:32:26.539708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.705 [2024-07-12 14:32:26.542397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.705 [2024-07-12 14:32:26.551816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.705 [2024-07-12 14:32:26.552244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.705 [2024-07-12 14:32:26.552259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.705 [2024-07-12 14:32:26.552266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.705 [2024-07-12 14:32:26.552444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.705 [2024-07-12 14:32:26.552616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.705 [2024-07-12 14:32:26.552624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.552630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.555316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.564669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.565078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.565094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.565101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.565273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.565456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.565465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.565471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.568156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.577616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.578044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.578086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.578107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.578653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.578827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.578835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.578840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.581532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.590429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.590837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.590851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.590858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.591021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.591182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.591190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.591196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.593895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.603300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.603757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.603772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.603779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.603951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.604127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.604135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.604141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.606834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.616193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.616655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.616670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.616677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.616849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.617025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.617032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.617038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.619743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.629046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.629483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.629525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.629546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.630019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.630185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.630193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.630198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.632888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.641947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.642390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.642406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.642429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.642605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.642783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.642791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.642797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.645632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.654975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.655394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.655409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.655416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.655594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.706 [2024-07-12 14:32:26.655777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.706 [2024-07-12 14:32:26.655784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.706 [2024-07-12 14:32:26.655790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.706 [2024-07-12 14:32:26.658556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.706 [2024-07-12 14:32:26.667881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.706 [2024-07-12 14:32:26.668328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.706 [2024-07-12 14:32:26.668343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.706 [2024-07-12 14:32:26.668350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.706 [2024-07-12 14:32:26.668557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.707 [2024-07-12 14:32:26.668730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.707 [2024-07-12 14:32:26.668738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.707 [2024-07-12 14:32:26.668744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.707 [2024-07-12 14:32:26.671433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.707 [2024-07-12 14:32:26.680777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.707 [2024-07-12 14:32:26.681231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.707 [2024-07-12 14:32:26.681272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.707 [2024-07-12 14:32:26.681293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.707 [2024-07-12 14:32:26.681888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.707 [2024-07-12 14:32:26.682341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.707 [2024-07-12 14:32:26.682352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.707 [2024-07-12 14:32:26.682361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.707 [2024-07-12 14:32:26.686420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.707 [2024-07-12 14:32:26.694369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.707 [2024-07-12 14:32:26.694819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.707 [2024-07-12 14:32:26.694851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.707 [2024-07-12 14:32:26.694874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.707 [2024-07-12 14:32:26.695467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.707 [2024-07-12 14:32:26.696050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.707 [2024-07-12 14:32:26.696074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.707 [2024-07-12 14:32:26.696093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.707 [2024-07-12 14:32:26.698846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.707 [2024-07-12 14:32:26.707286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.707 [2024-07-12 14:32:26.707741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.707 [2024-07-12 14:32:26.707756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.707 [2024-07-12 14:32:26.707763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.707 [2024-07-12 14:32:26.707935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.707 [2024-07-12 14:32:26.708107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.707 [2024-07-12 14:32:26.708115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.707 [2024-07-12 14:32:26.708121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.707 [2024-07-12 14:32:26.710871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.965 [2024-07-12 14:32:26.720214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.965 [2024-07-12 14:32:26.720593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.965 [2024-07-12 14:32:26.720611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.965 [2024-07-12 14:32:26.720618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.965 [2024-07-12 14:32:26.720790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.720962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.720969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.720975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.723681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.733031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.733504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.733546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.733568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.734090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.734262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.734270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.734276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.736968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.745909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.746307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.746321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.746328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.746518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.746690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.746698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.746704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.749388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.758749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.759177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.759192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.759198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.759360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.759556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.759564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.759570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.762253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.771629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.772054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.772069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.772075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.772238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.772422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.772431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.772437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.775118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.784451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.784879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.784920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.784942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.785372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.785572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.785580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.785586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.788282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.797338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.797764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.797779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.797785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.797948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.798109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.798116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.798122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.800822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.810239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.810669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.810684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.810691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.810863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.811034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.811042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.811048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.813753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.823160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.823508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.823523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.823530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.823701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.823874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.823882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.823888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.826577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.836086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.836451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.836493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.836514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.837092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.837528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.837536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.966 [2024-07-12 14:32:26.837542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.966 [2024-07-12 14:32:26.840268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.966 [2024-07-12 14:32:26.849014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:34.966 [2024-07-12 14:32:26.849461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.966 [2024-07-12 14:32:26.849476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:34.966 [2024-07-12 14:32:26.849487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:34.966 [2024-07-12 14:32:26.849660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:34.966 [2024-07-12 14:32:26.849833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:34.966 [2024-07-12 14:32:26.849842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:34.967 [2024-07-12 14:32:26.849848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:34.967 [2024-07-12 14:32:26.852562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:34.967 [2024-07-12 14:32:26.861877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.862179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.862195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.862201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.862373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.862552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.862561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.862566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.865252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.875079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.875516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.875532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.875539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.875716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.875894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.875902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.875908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.878758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.888052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.888503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.888555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.888577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.889156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.889749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.889782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.889808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.892573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.901133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.901570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.901586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.901593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.901769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.901947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.901955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.901961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.904771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.914143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.914567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.914582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.914588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.914750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.914912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.914919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.914924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.917693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.927048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.927414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.927430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.927436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.927607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.927784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.927791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.927797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.930522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.940015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.940410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.940450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.940471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.941050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.941648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.941674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.941694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.944429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.952954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.953307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.953321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.953328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.953505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.953678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.953685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.953691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.956381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.967 [2024-07-12 14:32:26.965906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:34.967 [2024-07-12 14:32:26.966263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.967 [2024-07-12 14:32:26.966279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:34.967 [2024-07-12 14:32:26.966285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:34.967 [2024-07-12 14:32:26.966461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:34.967 [2024-07-12 14:32:26.966634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:34.967 [2024-07-12 14:32:26.966642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:34.967 [2024-07-12 14:32:26.966648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:34.967 [2024-07-12 14:32:26.969411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.227 [2024-07-12 14:32:26.978971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.227 [2024-07-12 14:32:26.979319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.227 [2024-07-12 14:32:26.979334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.227 [2024-07-12 14:32:26.979341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.227 [2024-07-12 14:32:26.979535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.227 [2024-07-12 14:32:26.979709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.227 [2024-07-12 14:32:26.979717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.227 [2024-07-12 14:32:26.979723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.227 [2024-07-12 14:32:26.982444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.227 [2024-07-12 14:32:26.991904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.227 [2024-07-12 14:32:26.992326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.227 [2024-07-12 14:32:26.992367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.227 [2024-07-12 14:32:26.992402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.227 [2024-07-12 14:32:26.992895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.227 [2024-07-12 14:32:26.993069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.227 [2024-07-12 14:32:26.993076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.227 [2024-07-12 14:32:26.993082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.227 [2024-07-12 14:32:26.995809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.227 [2024-07-12 14:32:27.004855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.227 [2024-07-12 14:32:27.005237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.005253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.005259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.005438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.005611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.005619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.005625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.008311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.017819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.018153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.018168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.018175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.018346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.018525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.018533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.018542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.021221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.030789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.031231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.031279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.031300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.031830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.032003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.032010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.032016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.034703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.043719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.044081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.044096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.044103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.044275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.044452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.044460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.044466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.047154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.056671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.056970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.056986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.056992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.057165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.057337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.057344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.057350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.060052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.069567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.069949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.069968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.069975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.070147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.070319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.070327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.070333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.073024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.082536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.082979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.082995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.083001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.083173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.083345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.083353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.083359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.086048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.095376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.095664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.095680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.095686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.095857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.096029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.096036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.096042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.098778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.108269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.108660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.108702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.108723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.109182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.109357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.109365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.109371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.112114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.121224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.121602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.121645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.121667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.122197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.122370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.122383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.122390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.125213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.134287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.228 [2024-07-12 14:32:27.134655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.228 [2024-07-12 14:32:27.134696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.228 [2024-07-12 14:32:27.134718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.228 [2024-07-12 14:32:27.135263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.228 [2024-07-12 14:32:27.135446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.228 [2024-07-12 14:32:27.135454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.228 [2024-07-12 14:32:27.135461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.228 [2024-07-12 14:32:27.138295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.228 [2024-07-12 14:32:27.147358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.147810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.147826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.147833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.148010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.148187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.148195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.148201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.151046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.160438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.160855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.160871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.160877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.161049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.161221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.161229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.161234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.164023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.173502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.173844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.173858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.173865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.174036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.174209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.174217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.174223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.177005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.186629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.187002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.187017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.187024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.187196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.187367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.187375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.187388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.190124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.199568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.199911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.199927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.199937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.200110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.200281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.200289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.200295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.203016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.212525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.212865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.212881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.212887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.213060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.213236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.213244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.213250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.215987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.229 [2024-07-12 14:32:27.225532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.229 [2024-07-12 14:32:27.225894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.229 [2024-07-12 14:32:27.225909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.229 [2024-07-12 14:32:27.225917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.229 [2024-07-12 14:32:27.226088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.229 [2024-07-12 14:32:27.226261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.229 [2024-07-12 14:32:27.226269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.229 [2024-07-12 14:32:27.226275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.229 [2024-07-12 14:32:27.229017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.490 [2024-07-12 14:32:27.238516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.490 [2024-07-12 14:32:27.238908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.490 [2024-07-12 14:32:27.238923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.490 [2024-07-12 14:32:27.238930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.490 [2024-07-12 14:32:27.239102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.490 [2024-07-12 14:32:27.239273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.490 [2024-07-12 14:32:27.239284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.490 [2024-07-12 14:32:27.239290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.490 [2024-07-12 14:32:27.242053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.490 [2024-07-12 14:32:27.251565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.490 [2024-07-12 14:32:27.252011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.490 [2024-07-12 14:32:27.252026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.490 [2024-07-12 14:32:27.252033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.490 [2024-07-12 14:32:27.252204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.490 [2024-07-12 14:32:27.252376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.490 [2024-07-12 14:32:27.252390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.490 [2024-07-12 14:32:27.252396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.490 [2024-07-12 14:32:27.255125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.490 [2024-07-12 14:32:27.264499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.490 [2024-07-12 14:32:27.264947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.490 [2024-07-12 14:32:27.264993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.490 [2024-07-12 14:32:27.265015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.490 [2024-07-12 14:32:27.265575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.490 [2024-07-12 14:32:27.265749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.490 [2024-07-12 14:32:27.265756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.490 [2024-07-12 14:32:27.265762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.490 [2024-07-12 14:32:27.268449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.490 [2024-07-12 14:32:27.277343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.490 [2024-07-12 14:32:27.277766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.490 [2024-07-12 14:32:27.277781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.490 [2024-07-12 14:32:27.277788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.490 [2024-07-12 14:32:27.277960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.490 [2024-07-12 14:32:27.278131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.490 [2024-07-12 14:32:27.278139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.490 [2024-07-12 14:32:27.278145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.490 [2024-07-12 14:32:27.280917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.490 [2024-07-12 14:32:27.290405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.490 [2024-07-12 14:32:27.290688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.490 [2024-07-12 14:32:27.290703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.490 [2024-07-12 14:32:27.290709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 [2024-07-12 14:32:27.290881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 [2024-07-12 14:32:27.291053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.291061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.291067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 [2024-07-12 14:32:27.293885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 [2024-07-12 14:32:27.303292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 [2024-07-12 14:32:27.303722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.491 [2024-07-12 14:32:27.303737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.491 [2024-07-12 14:32:27.303744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 [2024-07-12 14:32:27.303916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 [2024-07-12 14:32:27.304091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.304098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.304104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 [2024-07-12 14:32:27.306918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 [2024-07-12 14:32:27.316254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 [2024-07-12 14:32:27.316702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.491 [2024-07-12 14:32:27.316717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.491 [2024-07-12 14:32:27.316724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 [2024-07-12 14:32:27.316896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 [2024-07-12 14:32:27.317068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.317076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.317082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 [2024-07-12 14:32:27.319813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2697758 Killed "${NVMF_APP[@]}" "$@"
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:35.491 [2024-07-12 14:32:27.329334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2699188
00:27:35.491 [2024-07-12 14:32:27.329684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 14:32:27.329700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
[2024-07-12 14:32:27.329707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2699188
00:27:35.491 [2024-07-12 14:32:27.329883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:35.491 [2024-07-12 14:32:27.330062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.330070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.330076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2699188 ']'
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:35.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:35.491 14:32:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:35.491 [2024-07-12 14:32:27.332914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 [2024-07-12 14:32:27.342478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 [2024-07-12 14:32:27.342776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.491 [2024-07-12 14:32:27.342792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.491 [2024-07-12 14:32:27.342798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 [2024-07-12 14:32:27.342976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 [2024-07-12 14:32:27.343154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.343161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.343168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 [2024-07-12 14:32:27.346011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 [2024-07-12 14:32:27.355550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 [2024-07-12 14:32:27.355916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.491 [2024-07-12 14:32:27.355931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.491 [2024-07-12 14:32:27.355938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.491 [2024-07-12 14:32:27.356118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.491 [2024-07-12 14:32:27.356295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.491 [2024-07-12 14:32:27.356304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.491 [2024-07-12 14:32:27.356311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.491 [2024-07-12 14:32:27.359293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.491 [2024-07-12 14:32:27.368673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.491 [2024-07-12 14:32:27.369130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.491 [2024-07-12 14:32:27.369145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.491 [2024-07-12 14:32:27.369153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.369329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.369511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.369519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.369525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.372365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 [2024-07-12 14:32:27.373738] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:27:35.492 [2024-07-12 14:32:27.373786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:35.492 [2024-07-12 14:32:27.381781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.492 [2024-07-12 14:32:27.382219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.492 [2024-07-12 14:32:27.382235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.492 [2024-07-12 14:32:27.382242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.382425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.382603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.382611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.382618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.385459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 [2024-07-12 14:32:27.394819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.492 [2024-07-12 14:32:27.395191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.492 [2024-07-12 14:32:27.395207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.492 [2024-07-12 14:32:27.395214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.395401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.395583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.395591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.395598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.398430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 EAL: No free 2048 kB hugepages reported on node 1
00:27:35.492 [2024-07-12 14:32:27.408000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.492 [2024-07-12 14:32:27.408417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.492 [2024-07-12 14:32:27.408434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.492 [2024-07-12 14:32:27.408441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.408618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.408796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.408804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.408810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.411649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 [2024-07-12 14:32:27.421203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.492 [2024-07-12 14:32:27.421650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.492 [2024-07-12 14:32:27.421666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.492 [2024-07-12 14:32:27.421673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.421851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.422030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.422038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.422044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.424884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 [2024-07-12 14:32:27.432639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:35.492 [2024-07-12 14:32:27.434295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.492 [2024-07-12 14:32:27.434747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.492 [2024-07-12 14:32:27.434763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.492 [2024-07-12 14:32:27.434770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.492 [2024-07-12 14:32:27.434948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.492 [2024-07-12 14:32:27.435126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.492 [2024-07-12 14:32:27.435134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.492 [2024-07-12 14:32:27.435144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.492 [2024-07-12 14:32:27.437959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.492 [2024-07-12 14:32:27.447395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.492 [2024-07-12 14:32:27.447787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.492 [2024-07-12 14:32:27.447803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.492 [2024-07-12 14:32:27.447810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.492 [2024-07-12 14:32:27.447987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.492 [2024-07-12 14:32:27.448166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.492 [2024-07-12 14:32:27.448174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.492 [2024-07-12 14:32:27.448180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.492 [2024-07-12 14:32:27.450987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.492 [2024-07-12 14:32:27.460481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.492 [2024-07-12 14:32:27.460916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.492 [2024-07-12 14:32:27.460932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.492 [2024-07-12 14:32:27.460939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.492 [2024-07-12 14:32:27.461116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.492 [2024-07-12 14:32:27.461294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.492 [2024-07-12 14:32:27.461302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.492 [2024-07-12 14:32:27.461309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.492 [2024-07-12 14:32:27.464154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.492 [2024-07-12 14:32:27.473607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.492 [2024-07-12 14:32:27.474075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.492 [2024-07-12 14:32:27.474092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.492 [2024-07-12 14:32:27.474100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.492 [2024-07-12 14:32:27.474278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.492 [2024-07-12 14:32:27.474461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.492 [2024-07-12 14:32:27.474470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.492 [2024-07-12 14:32:27.474476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.492 [2024-07-12 14:32:27.477313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.492 [2024-07-12 14:32:27.486767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.492 [2024-07-12 14:32:27.487191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.492 [2024-07-12 14:32:27.487207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.492 [2024-07-12 14:32:27.487213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.492 [2024-07-12 14:32:27.487390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.492 [2024-07-12 14:32:27.487584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.493 [2024-07-12 14:32:27.487592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.493 [2024-07-12 14:32:27.487598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.493 [2024-07-12 14:32:27.490419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.499831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.500262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.500278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.500286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.500470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.500652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.500660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.500667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.503531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.512925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.513347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.513364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.513371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.513555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.513732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.513741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.513747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.514150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.754 [2024-07-12 14:32:27.514175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.754 [2024-07-12 14:32:27.514182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.754 [2024-07-12 14:32:27.514188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.754 [2024-07-12 14:32:27.514194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:35.754 [2024-07-12 14:32:27.514234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.754 [2024-07-12 14:32:27.514318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.754 [2024-07-12 14:32:27.514320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.754 [2024-07-12 14:32:27.516617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.754 [2024-07-12 14:32:27.526014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.526455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.526475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.526483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.526662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.526840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.526849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.526855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.529692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.539081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.539523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.539541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.539549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.539727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.539903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.539911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.539918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.542754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.552288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.552734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.552753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.552761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.552940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.553120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.553128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.553135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.555973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.565434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.565895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.565912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.565920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.566097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.566275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.566283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.566290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.569130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.578520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.578937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.578953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.578961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.579139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.579317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.579325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.579332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.582171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.591731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.592167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.592183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.754 [2024-07-12 14:32:27.592191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.754 [2024-07-12 14:32:27.592369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.754 [2024-07-12 14:32:27.592555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.754 [2024-07-12 14:32:27.592564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.754 [2024-07-12 14:32:27.592570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.754 [2024-07-12 14:32:27.595407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.754 [2024-07-12 14:32:27.604776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.754 [2024-07-12 14:32:27.605214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.754 [2024-07-12 14:32:27.605229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.605236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.605423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.605601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.605610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.605616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.608456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.617852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.618290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.618307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.618314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.618496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.618674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.618682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.618689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.621522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.630891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.631301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.631317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.631324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.631507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.631685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.631693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.631700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.634534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.644066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.644503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.644519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.644526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.644704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.644882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.644890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.644900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.647739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.657127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.657499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.657515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.657522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.657700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.657876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.657884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.657890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.660740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.670296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.670649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.670665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.670672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.670848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.671026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.671034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.671040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.673878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.683416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.683830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.683846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.683853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.684029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.684207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.684215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.684221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.687056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.696592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.697010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.697030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.697036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.697213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.697396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.697404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.697411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.700241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.709767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.710185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.710201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.710208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.710391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.710570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.710578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.710584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.713419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.722944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.755 [2024-07-12 14:32:27.723362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.755 [2024-07-12 14:32:27.723382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:35.755 [2024-07-12 14:32:27.723389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:35.755 [2024-07-12 14:32:27.723566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:35.755 [2024-07-12 14:32:27.723743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.755 [2024-07-12 14:32:27.723751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.755 [2024-07-12 14:32:27.723757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.755 [2024-07-12 14:32:27.726588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.755 [2024-07-12 14:32:27.736123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.755 [2024-07-12 14:32:27.736533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.755 [2024-07-12 14:32:27.736549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.755 [2024-07-12 14:32:27.736556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.755 [2024-07-12 14:32:27.736734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.755 [2024-07-12 14:32:27.736914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.755 [2024-07-12 14:32:27.736922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.755 [2024-07-12 14:32:27.736929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.755 [2024-07-12 14:32:27.739760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.756 [2024-07-12 14:32:27.749285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.756 [2024-07-12 14:32:27.749703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.756 [2024-07-12 14:32:27.749719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:35.756 [2024-07-12 14:32:27.749725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:35.756 [2024-07-12 14:32:27.749903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:35.756 [2024-07-12 14:32:27.750080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.756 [2024-07-12 14:32:27.750089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.756 [2024-07-12 14:32:27.750095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.756 [2024-07-12 14:32:27.752931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.762489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.762905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.762921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.762928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.763105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.763284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.763292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.763298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.766129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.775653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.776006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.776021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.776028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.776205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.776387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.776395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.776401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.779233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.788768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.789178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.789193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.789200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.789383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.789560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.789568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.789574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.792403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.801944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.802359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.802374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.802384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.802561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.802737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.802745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.802751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.805585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.815111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.815524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.815540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.815547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.815724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.815900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.815908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.815914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.818752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.828298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.828720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.828737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.828747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.828924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.829103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.829111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.829117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.831948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.841477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.841894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.841909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.841916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.842092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.842270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.842278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.842284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.845122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.854685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.855114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.855129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.855135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.855312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.855494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.855503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.855509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.858338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.867885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.868323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.868338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.868345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.868526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.868703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.868714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.868720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.871554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.881139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.881489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.881504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.881511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.881687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.881864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.881872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.881878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.884712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.894238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.894655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.894671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.894678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.894854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.895031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.895039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.895045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.897876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.907401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.907813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.907829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.907836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.908012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.908190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.908198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.908204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.911038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.920571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.920988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.921003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.921009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.921186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.921363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.921371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.921382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.924209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.933739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.934148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.934163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.934170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.934346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.934528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.934536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.934542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.937379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.946909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.947300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.947316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.947323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.947503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.947679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.947687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.947693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.950523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.099 [2024-07-12 14:32:27.960059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.099 [2024-07-12 14:32:27.960471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.099 [2024-07-12 14:32:27.960487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.099 [2024-07-12 14:32:27.960494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.099 [2024-07-12 14:32:27.960674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.099 [2024-07-12 14:32:27.960851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.099 [2024-07-12 14:32:27.960859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.099 [2024-07-12 14:32:27.960866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.099 [2024-07-12 14:32:27.963700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:27.973231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:27.973656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:27.973672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:27.973678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:27.973854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:27.974030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:27.974038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:27.974044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:27.976877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:27.986411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:27.986828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:27.986843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:27.986850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:27.987026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:27.987208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:27.987216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:27.987222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:27.990055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:27.999583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:27.999997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.000012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.000019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.000196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.000374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.000386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.000395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.003223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.012765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.013227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.013243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.013251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.013433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.013611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.013619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.013625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.016457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.025834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.026183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.026200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.026207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.026389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.026568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.026576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.026582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.029416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.038949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.039387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.039402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.039409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.039586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.039762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.039770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.039776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.042615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.052145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.052592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.052608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.052615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.052792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.052969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.052977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.052983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.055821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.065209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.065652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.065668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.065675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.065855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.066031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.066039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.066045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.068897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.078310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.078744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.078762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.078770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.078947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.079126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.079135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.079141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.081975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.100 [2024-07-12 14:32:28.091512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.100 [2024-07-12 14:32:28.091880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.100 [2024-07-12 14:32:28.091897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420
00:27:36.100 [2024-07-12 14:32:28.091904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set
00:27:36.100 [2024-07-12 14:32:28.092084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor
00:27:36.100 [2024-07-12 14:32:28.092262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.100 [2024-07-12 14:32:28.092272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.100 [2024-07-12 14:32:28.092278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.100 [2024-07-12 14:32:28.095112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.361 [2024-07-12 14:32:28.104657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.105101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.105119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.105126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.105305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.105488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.105498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.105505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.108338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.117721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.118090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.118106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.118115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.118292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.118476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.118486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.118492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.121323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.130870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.131246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.131264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.131272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.131454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.131633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.131643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.131653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.134488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.144026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.144443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.144460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.144467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.144644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.144823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.144832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.144839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.147668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.157213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.157654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.157671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.157678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.157856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.158032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.158041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.158047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.160885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.170255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.170654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.170672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.170679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.170856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.171034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.171044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.171050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.173886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.361 [2024-07-12 14:32:28.183427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.361 [2024-07-12 14:32:28.183845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.361 [2024-07-12 14:32:28.183869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.361 [2024-07-12 14:32:28.183877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.361 [2024-07-12 14:32:28.184053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.361 [2024-07-12 14:32:28.184232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.361 [2024-07-12 14:32:28.184242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.361 [2024-07-12 14:32:28.184249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.361 [2024-07-12 14:32:28.187286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 [2024-07-12 14:32:28.196508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.196954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.196972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.196980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.197158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.197338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.197348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.197355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 [2024-07-12 14:32:28.200198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 [2024-07-12 14:32:28.209585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.209961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.209980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.209987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.210165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.210344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.210354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.210360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 [2024-07-12 14:32:28.213195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 [2024-07-12 14:32:28.222743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.223186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.223203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.223215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.223397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.223577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.223587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.223594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 [2024-07-12 14:32:28.226429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 [2024-07-12 14:32:28.230938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.362 [2024-07-12 14:32:28.235802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.362 [2024-07-12 14:32:28.236242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.236260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.236267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.236448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 [2024-07-12 14:32:28.236627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.236637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.236643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 [2024-07-12 14:32:28.239482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 [2024-07-12 14:32:28.248860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.249302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.249319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.249327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.249511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.249690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.249700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.249707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 [2024-07-12 14:32:28.252546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 [2024-07-12 14:32:28.261929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.262297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.262315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.262323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.262506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.262687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.262697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.262704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 [2024-07-12 14:32:28.265540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 Malloc0 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 [2024-07-12 14:32:28.275098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.275482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.275500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.275507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.275686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.362 [2024-07-12 14:32:28.275864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.362 [2024-07-12 14:32:28.275875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.362 [2024-07-12 14:32:28.275883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.362 [2024-07-12 14:32:28.278720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.362 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.362 [2024-07-12 14:32:28.288266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.362 [2024-07-12 14:32:28.288642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.362 [2024-07-12 14:32:28.288659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b55980 with addr=10.0.0.2, port=4420 00:27:36.362 [2024-07-12 14:32:28.288667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55980 is same with the state(5) to be set 00:27:36.362 [2024-07-12 14:32:28.288845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55980 (9): Bad file descriptor 00:27:36.363 [2024-07-12 14:32:28.289022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.363 [2024-07-12 14:32:28.289036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.363 [2024-07-12 14:32:28.289043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.363 [2024-07-12 14:32:28.291881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.363 [2024-07-12 14:32:28.296087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.363 14:32:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2698143 00:27:36.363 [2024-07-12 14:32:28.301440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.621 [2024-07-12 14:32:28.448574] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:46.608 00:27:46.608 Latency(us) 00:27:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.608 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:46.608 Verification LBA range: start 0x0 length 0x4000 00:27:46.608 Nvme1n1 : 15.01 8038.47 31.40 12996.50 0.00 6065.32 594.81 21997.30 00:27:46.608 =================================================================================================================== 00:27:46.608 Total : 8038.47 31.40 12996.50 0.00 6065.32 594.81 21997.30 00:27:46.608 14:32:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.608 rmmod nvme_tcp 00:27:46.608 rmmod nvme_fabrics 00:27:46.608 rmmod nvme_keyring 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2699188 ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2699188 ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2699188' 00:27:46.608 killing process with pid 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2699188 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.608 14:32:37 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.608 14:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.545 14:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.545 00:27:47.545 real 0m26.303s 00:27:47.545 user 1m3.516s 00:27:47.545 sys 0m6.209s 00:27:47.545 14:32:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.546 14:32:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:47.546 ************************************ 00:27:47.546 END TEST nvmf_bdevperf 00:27:47.546 ************************************ 00:27:47.546 14:32:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:47.546 14:32:39 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:47.546 14:32:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:47.546 14:32:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.546 14:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.546 ************************************ 00:27:47.546 START TEST nvmf_target_disconnect 00:27:47.546 ************************************ 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:47.546 * Looking for test storage... 
00:27:47.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.546 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.805 14:32:39 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:47.805 14:32:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:53.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
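The gather_supported_nvmf_pci_devs steps traced above build per-family arrays (e810, x722, mlx) by looking up known vendor:device IDs in a PCI cache, then pick the matching family as pci_devs. A hedged sketch of that matching logic, with a hand-seeded pci_bus_cache standing in for the real bus scan (the addresses and the "vendor:device" key format are taken from the log; the real common.sh populates the cache itself):

```shell
# pci_bus_cache is assumed to map "vendor:device" IDs to space-separated
# PCI addresses, as nvmf/common.sh does after scanning the bus.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"  # Intel E810 ports (from the log)
  ["0x15b3:0x1017"]=""                           # Mellanox CX-5: none present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
# Unquoted expansion deliberately word-splits the cached address list,
# so two ports become two array elements.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"   # one line per detected E810 port
done
```

This reproduces the two "Found 0000:86:00.x (0x8086 - 0x159b)" messages seen in the trace: both E810 ports land in pci_devs, and the empty Mellanox entry contributes nothing.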
00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:53.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.079 14:32:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.079 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:53.079 Found net devices under 0000:86:00.0: cvl_0_0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:53.080 Found net devices under 0000:86:00.1: cvl_0_1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:53.080 14:32:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:53.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:53.080 00:27:53.080 --- 10.0.0.2 ping statistics --- 00:27:53.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.080 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:27:53.080 00:27:53.080 --- 10.0.0.1 ping statistics --- 00:27:53.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.080 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.080 14:32:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 ************************************ 00:27:53.080 START TEST nvmf_target_disconnect_tc1 00:27:53.080 ************************************ 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.080 14:32:44 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.080 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.080 [2024-07-12 14:32:44.872745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.080 [2024-07-12 14:32:44.872789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1163e60 with addr=10.0.0.2, port=4420 00:27:53.080 [2024-07-12 14:32:44.872810] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:53.080 [2024-07-12 14:32:44.872822] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:53.080 [2024-07-12 14:32:44.872828] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:53.080 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:53.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:53.080 Initializing NVMe Controllers 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:53.080 00:27:53.080 real 0m0.084s 00:27:53.080 user 0m0.032s 00:27:53.080 sys 0m0.052s 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 ************************************ 00:27:53.080 END TEST nvmf_target_disconnect_tc1 00:27:53.080 ************************************ 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 ************************************ 00:27:53.080 START TEST nvmf_target_disconnect_tc2 00:27:53.080 
************************************ 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2704141 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2704141 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2704141 ']' 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.080 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:53.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.081 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.081 14:32:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.081 [2024-07-12 14:32:45.006945] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:53.081 [2024-07-12 14:32:45.006986] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.081 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.081 [2024-07-12 14:32:45.078995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.339 [2024-07-12 14:32:45.152278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.339 [2024-07-12 14:32:45.152319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.339 [2024-07-12 14:32:45.152326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.339 [2024-07-12 14:32:45.152332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.339 [2024-07-12 14:32:45.152336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
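The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which polls until the freshly started nvmf_tgt exposes its RPC endpoint. A rough sketch of that polling loop, under stated assumptions: here a plain file stands in for the RPC UNIX socket, and the real helper also probes the socket with an RPC call rather than just checking existence.

```shell
# Simplified waitforlisten: poll until the target's RPC endpoint appears,
# giving up if the process dies or max_retries is exhausted.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while [ ! -e "$rpc_addr" ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1  # timed out
    sleep 0.1
  done
  return 0
}
```

The pid check matters: without it, a target that crashes on startup (as nvmf_target_disconnect_tc2 later provokes with kill -9) would leave the caller polling forever.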
00:27:53.339 [2024-07-12 14:32:45.152469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:53.339 [2024-07-12 14:32:45.152576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:53.339 [2024-07-12 14:32:45.152664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:53.339 [2024-07-12 14:32:45.152665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.906 Malloc0 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:53.906 [2024-07-12 14:32:45.877195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:53.906 [2024-07-12 14:32:45.902151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:53.906 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
[... repeated xtrace_disable / "set +x" / "[[ 0 == 0 ]]" tracing lines surrounding each rpc_cmd elided ...]
00:27:54.165 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2704387
00:27:54.165 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:54.165 14:32:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:54.165 EAL: No free 2048 kB hugepages reported on node 1
00:27:56.073 14:32:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2704141
00:27:56.073 14:32:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:56.073 Read completed with error (sct=0, sc=8)
00:27:56.073 starting I/O failed
[... repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" completion lines elided; each run preceded the CQ transport error for one qpair below ...]
00:27:56.073 [2024-07-12 14:32:47.930897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:56.073 [2024-07-12 14:32:47.931098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.074 [2024-07-12 14:32:47.931289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:56.074 [2024-07-12 14:32:47.931505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.074 [2024-07-12 14:32:47.931763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.074 [2024-07-12 14:32:47.931781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.074 qpair failed and we were unable to recover it.
00:27:56.074 [2024-07-12 14:32:47.931931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.074 [2024-07-12 14:32:47.931942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.074 qpair failed and we were unable to recover it.
[... repeated "connect() failed, errno = 111" / "sock connection error" retries on tqpair=0x7f9244000b90 elided ...]
00:27:56.074 [2024-07-12 14:32:47.933850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.074 [2024-07-12 14:32:47.933867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.074 qpair failed and we were unable to recover it.
[... repeated retries on tqpair=0x7f9254000b90 elided ...]
00:27:56.075 [2024-07-12 14:32:47.936321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.075 [2024-07-12 14:32:47.936339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.075 qpair failed and we were unable to recover it.
[... repeated retries on tqpair=0x7f924c000b90 elided ...]
00:27:56.076 [2024-07-12 14:32:47.944598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.076 [2024-07-12 14:32:47.944633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.076 qpair failed and we were unable to recover it.
[... further retries on tqpair=0x7f9244000b90 elided ...]
00:27:56.076 [2024-07-12 14:32:47.946869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.946884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.947006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.947173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.947383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.947501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-12 14:32:47.947673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.947935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.947951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.948100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.948286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.948409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-12 14:32:47.948587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.948812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.948977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.948992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.949173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.949204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.949397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.949429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-12 14:32:47.949618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.949649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.949873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.949904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.950187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.950203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.950474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.950490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.950705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.950737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-12 14:32:47.950876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.950909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.951109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.951140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.951411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.951443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.951579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.951610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.951757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.951789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-12 14:32:47.951922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.951953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.952211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.952226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.952460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.952476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-12 14:32:47.952599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.076 [2024-07-12 14:32:47.952614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.952779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.952794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.953017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.953033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.953258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.953276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.953478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.953495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.953730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.953746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.954000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.954174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.954420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.954694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.954802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.954934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.954948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.955121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.955152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.955418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.955450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.955696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.955727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.956000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.956031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.956241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.956273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.956493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.956525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.956781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.956812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.957000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.957032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.957221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.957265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.957434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.957451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.957686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.957703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.957893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.957924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.958109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.958140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.958363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.958415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.958661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.958693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-12 14:32:47.958894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.958925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.959174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.959205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-12 14:32:47.959468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.077 [2024-07-12 14:32:47.959500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.959770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.959801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.960027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-12 14:32:47.960283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.960298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.960443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.960459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.960701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.960732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.960859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.960890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.961211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.961242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-12 14:32:47.961528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.961560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.961761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.961792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.961971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.961987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.962218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.962249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.962487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.962519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-12 14:32:47.962654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.962685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.962879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.962916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.963124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.963155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.963330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.963346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-12 14:32:47.963503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.078 [2024-07-12 14:32:47.963534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-12 14:32:47.963755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.078 [2024-07-12 14:32:47.963786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.078 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure sequence repeated from 14:32:47.963755 through 14:32:47.990516 ...]
00:27:56.081 [2024-07-12 14:32:47.990500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.081 [2024-07-12 14:32:47.990516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.081 qpair failed and we were unable to recover it.
00:27:56.081 [2024-07-12 14:32:47.990748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.990780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.990911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.990947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.991135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.991167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.991347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.991363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.991627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.991643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 
00:27:56.081 [2024-07-12 14:32:47.991809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.991825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.991930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.991961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.992206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.992237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.992509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.992541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.992739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.992770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 
00:27:56.081 [2024-07-12 14:32:47.992964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.992995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.993187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.993203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.993459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.993491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.993595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.993627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.993819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.993850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 
00:27:56.081 [2024-07-12 14:32:47.994044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.994075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.994216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.994232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.994369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.994435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.994635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.994666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.994852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.994882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 
00:27:56.081 [2024-07-12 14:32:47.995071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.995275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.995448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.995725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.995826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 
00:27:56.081 [2024-07-12 14:32:47.995927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.995942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.081 qpair failed and we were unable to recover it. 00:27:56.081 [2024-07-12 14:32:47.996100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.081 [2024-07-12 14:32:47.996115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.996283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.996314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.996441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.996473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.996596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.996627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:47.996807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.996838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.996989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.997020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.997198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.997230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.997441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.997473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.997690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.997722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:47.997959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.997990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.998190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.998205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.998367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.998408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.998519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.998551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.998758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.998790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:47.998927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.998959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.999203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.999240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.999351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.999389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.999572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.999604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:47.999719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.999750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:47.999947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:47.999978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.000173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.000204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.000385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.000416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.000725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.000759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.000870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.000901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:48.001080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.001111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.001373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.001398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.001553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.001568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.001745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.001776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.001955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.001986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:48.002300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.002331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.002531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.002563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.002758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.002790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.002967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.002998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.003126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.003141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 
00:27:56.082 [2024-07-12 14:32:48.003288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.003303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.082 [2024-07-12 14:32:48.003493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.082 [2024-07-12 14:32:48.003525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.082 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.003745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.003777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.003958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.003990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.004184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.004215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 
00:27:56.083 [2024-07-12 14:32:48.004353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.004394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.004512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.004544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.004719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.004768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.004897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.004929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.005053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.005084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 
00:27:56.083 [2024-07-12 14:32:48.005328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.005359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.005506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.005537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.005715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.005746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.005863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.005895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 00:27:56.083 [2024-07-12 14:32:48.006140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.083 [2024-07-12 14:32:48.006171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.083 qpair failed and we were unable to recover it. 
00:27:56.083 [2024-07-12 14:32:48.006349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.083 [2024-07-12 14:32:48.006389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.083 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure triplet repeats continuously from 14:32:48.006349 through 14:32:48.029651 for tqpair=0x7f9244000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:27:56.086 [2024-07-12 14:32:48.029841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.086 [2024-07-12 14:32:48.029911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.086 qpair failed and we were unable to recover it.
00:27:56.086 [2024-07-12 14:32:48.030120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.086 [2024-07-12 14:32:48.030154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.086 qpair failed and we were unable to recover it.
00:27:56.086 [2024-07-12 14:32:48.030357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.030403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.030623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.030635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.030765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.030776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.030956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.030988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.031099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.031131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 
00:27:56.086 [2024-07-12 14:32:48.031325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.031356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.031579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.031591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.031741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.031772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.031986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.032018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.032158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.032189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 
00:27:56.086 [2024-07-12 14:32:48.032305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.032336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.032600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.032647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.032847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.032878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.032991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.033022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.033232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.033263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 
00:27:56.086 [2024-07-12 14:32:48.033436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.033478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.033700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.033711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.033790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.033800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.034032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.034063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.034268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.034299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 
00:27:56.086 [2024-07-12 14:32:48.034513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.034525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.034676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.034689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.034829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.034841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.035053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.035064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.035229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.035241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 
00:27:56.086 [2024-07-12 14:32:48.035397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.035430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.035633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.035664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.035774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.086 [2024-07-12 14:32:48.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.086 qpair failed and we were unable to recover it. 00:27:56.086 [2024-07-12 14:32:48.035938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.036235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.036266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.036482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.036514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.036658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.036689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.036986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.037016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.037166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.037178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.037349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.037392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.037581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.037612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.037792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.037823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.038089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.038119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.038375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.038442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.038616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.038633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.038743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.038774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.038970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.039002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.039250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.039280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.039461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.039478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.039669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.039700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.039825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.039857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.040071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.040110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.040264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.040280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.040602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.040638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.040789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.040821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.041027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.041059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.041236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.041252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.041521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.041554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.041695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.041726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.041923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.041954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.042096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.042128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.042397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.042429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.042609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.042641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.042828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.042860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.042987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.043018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.043233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.043264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.043462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.043497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.043737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.043752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.043934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.043950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.044061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.044091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 
00:27:56.087 [2024-07-12 14:32:48.044223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.044261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.044449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.044482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.044665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.044697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.044819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.044850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.087 qpair failed and we were unable to recover it. 00:27:56.087 [2024-07-12 14:32:48.045053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.087 [2024-07-12 14:32:48.045084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.045277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.045308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.045504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.045536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.045713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.045744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.045873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.045904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.046092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.046123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.046390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.046422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.046536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.046567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.046702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.046734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.047003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.047034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.047238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.047269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.047462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.047497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.047694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.047725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.047950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.047981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.048158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.048195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.048342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.048357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.048527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.048559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.048737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.048768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.048988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.049154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.049311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.049494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.049721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.049890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.049927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.050123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.050154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.050343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.050359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.050530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.050561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.050696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.050728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.050916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.050947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.051118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.051134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.051275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.051290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.051458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.051475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.051580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.051611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.051895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.051927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.052107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.052138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.052376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.052396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.052534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.052550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.052769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.052800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.052994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.053025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.053165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.053196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.088 [2024-07-12 14:32:48.053387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.053403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 
00:27:56.088 [2024-07-12 14:32:48.053641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.088 [2024-07-12 14:32:48.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.088 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.053857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.053888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.054001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.054032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.054211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.054242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.054357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.054396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.054573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.054604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.054784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.054814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.055082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.055113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.055397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.055437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.055630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.055666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.055806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.055837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.056031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.056062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.056174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.056205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.056417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.056450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.056678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.056693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.056847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.056863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.057025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.057057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.057273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.057305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.057485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.057517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.057621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.057636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.057832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.057863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.058110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.058253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.058406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.058578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.058708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.058952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.058983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.059162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.059193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.059318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.059335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.059566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.059583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.059677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.059692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.059873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.059905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.060023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.060055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.060231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.060263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.060546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.060577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.060702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.060733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.060924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.060955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.061164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.061195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.061408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.061440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.061631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.061663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 
00:27:56.089 [2024-07-12 14:32:48.061925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.061956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.062222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.089 [2024-07-12 14:32:48.062253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.089 qpair failed and we were unable to recover it. 00:27:56.089 [2024-07-12 14:32:48.062365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.062404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.062605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.062637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.062836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.062866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.063157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.063188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.063401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.063440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.063588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.063620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.063865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.063897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.064166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.064197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.064327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.064359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.064579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.064595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.064752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.064783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.065028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.065060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.065190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.065220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.065417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.065450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.065652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.065683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.065865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.065896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.066038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.066069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.066311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.066343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.066531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.066564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.066779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.066809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.066929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.066960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.067143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.067158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.067408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.067450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.067587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.067619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.067810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.067841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.067954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.067985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.068204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.068236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 00:27:56.090 [2024-07-12 14:32:48.068374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.090 [2024-07-12 14:32:48.068434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.090 qpair failed and we were unable to recover it. 
00:27:56.090 [2024-07-12 14:32:48.068611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.068627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.068715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.068729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.068868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.068884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.069140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.069244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.069445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.069599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.090 [2024-07-12 14:32:48.069827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.090 qpair failed and we were unable to recover it.
00:27:56.090 [2024-07-12 14:32:48.069958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.069989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.070151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.070325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.070605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.070754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.070900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.070996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.071917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.071931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.072093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.072108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.072261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.072277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.072429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.072445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.072604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.072619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.091 [2024-07-12 14:32:48.072709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.091 [2024-07-12 14:32:48.072724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.091 qpair failed and we were unable to recover it.
00:27:56.372 [2024-07-12 14:32:48.072932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.372 [2024-07-12 14:32:48.072948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.372 qpair failed and we were unable to recover it.
00:27:56.372 [2024-07-12 14:32:48.073024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.372 [2024-07-12 14:32:48.073038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.372 qpair failed and we were unable to recover it.
00:27:56.372 [2024-07-12 14:32:48.073139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.372 [2024-07-12 14:32:48.073153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.372 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.073246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.073260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.073486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.073503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.073582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.073595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.073763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.073780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.073944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.073959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.074838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.074853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.075800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.075994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.076238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.076463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.076619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.076783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.076951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.076982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.077175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.077206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.077339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.077355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.077590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.077606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.077761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.077777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.077863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.077894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.078944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.078975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.079215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.079231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.079406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.079423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.079639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.079670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.079851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.079882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.080060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.080092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.080290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.373 [2024-07-12 14:32:48.080321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.373 qpair failed and we were unable to recover it.
00:27:56.373 [2024-07-12 14:32:48.080514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.080547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.080683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.080714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.080970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.081000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.081192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.081223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.081348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.081386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.081562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.081631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.081884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.081954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.082173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.082208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.082398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.082432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.082647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.082662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.082754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.082785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.082964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.082995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.083925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.083955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.084132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.084172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.084367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.084413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.084539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.084555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.084711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.084747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.084957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.084989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.085173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.085204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.085345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.374 [2024-07-12 14:32:48.085389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.374 qpair failed and we were unable to recover it.
00:27:56.374 [2024-07-12 14:32:48.085501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.085533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.085782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.085814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.085933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.085963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.086091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.086122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.086398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.086431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 
00:27:56.374 [2024-07-12 14:32:48.086565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.086596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.086743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.086775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.087056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.087086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.087297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.087329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.087474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.087506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 
00:27:56.374 [2024-07-12 14:32:48.087619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.087635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.087871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.087886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.088106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.088137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.088322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.088353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.374 [2024-07-12 14:32:48.088504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.088535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 
00:27:56.374 [2024-07-12 14:32:48.088722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.374 [2024-07-12 14:32:48.088738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.374 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.088829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.088845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.088947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.088963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.089110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.089125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.089216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.089231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.089431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.089475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.089622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.089653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.089901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.089932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.090060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.090093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.090273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.090305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.090412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.090429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.090672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.090704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.090951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.090983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.091178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.091209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.091423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.091438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.091520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.091534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.091674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.091705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.091979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.092010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.092184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.092224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.092420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.092451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.092547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.092563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.092731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.092762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.093007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.093038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.093234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.093265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.093385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.093401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.093503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.093519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.093798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.093830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.094575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.094810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.094983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.095155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.095458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.095557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.095674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.095853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.095869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.375 [2024-07-12 14:32:48.096015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.096046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 
00:27:56.375 [2024-07-12 14:32:48.096239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.375 [2024-07-12 14:32:48.096270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.375 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.096415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.096446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.096584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.096614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.096882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.096913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.097039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.097070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.097242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.097312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.097594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.097663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.097845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.097863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.097950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.097964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.098074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.098089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.098261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.098291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.098432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.098464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.098587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.098618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.098808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.098839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.098970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.099118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.099341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.099575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.099665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.099825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.099841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.099993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.100643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.100842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.100856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.101093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.101108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.101317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.101332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.101541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.101557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.101648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.101662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.101904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.101935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.102073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.102104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.102228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.102259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.102452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.102485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.102599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.102630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.102769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.102799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.103044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.103075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 00:27:56.376 [2024-07-12 14:32:48.103274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.376 [2024-07-12 14:32:48.103305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.376 qpair failed and we were unable to recover it. 
00:27:56.376 [2024-07-12 14:32:48.103419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.376 [2024-07-12 14:32:48.103435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.376 qpair failed and we were unable to recover it.
00:27:56.376 [2024-07-12 14:32:48.103654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.103685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.103821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.103853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.104063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.104094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.104215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.104246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.104431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.104450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.104650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.104682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.104898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.104929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.105938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.105954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.106899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.106930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.107109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.107140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.107307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.107338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.107487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.107519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.107665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.107697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.107880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.107911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.108044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.108076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.108268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.108299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.108426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.108458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.108620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.108632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.108840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.108872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.109000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.109031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.109288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.109320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.109456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.109488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.109676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.377 [2024-07-12 14:32:48.109708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.377 qpair failed and we were unable to recover it.
00:27:56.377 [2024-07-12 14:32:48.109910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.109941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.110149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.110304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.110513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.110666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.110830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.110980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.111011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.111207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.111237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.111414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.111445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.111562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.111574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.111729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.111742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.112000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.112032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.112208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.112240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.112423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.112435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.112651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.112683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.112881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.112912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.113159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.113190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.113396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.113409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.113577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.113608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.113815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.113846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.114040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.114071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.114266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.114297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.114424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.114458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.114679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.114709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.114986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.115143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.115298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.115533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.115753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.115963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.115995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.116239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.116271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.116452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.116484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.116680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.116711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.116844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.116875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.117000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.117031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.117134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.117165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.117296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.117327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.117584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.117616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.117830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.117861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.118081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.118111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.378 [2024-07-12 14:32:48.118375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.378 [2024-07-12 14:32:48.118417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.378 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.118640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.118670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.118847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.118859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.118967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.118999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.119125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.119157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.119387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.119420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.119545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.119576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.119823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.119854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.120114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.120145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.120368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.120408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.120587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.120623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.120793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.120805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.120909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.121063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.121094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.121362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.121409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.121656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.121667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.121810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.121821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.121982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.122014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.122144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.122175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.122285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.379 [2024-07-12 14:32:48.122317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.379 qpair failed and we were unable to recover it.
00:27:56.379 [2024-07-12 14:32:48.122453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.122485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.122681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.122723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.122806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.122817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.122893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.122903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.123065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.123096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 
00:27:56.379 [2024-07-12 14:32:48.123227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.123257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.123370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.123411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.123590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.123621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.123844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.123875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.124083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.124114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 
00:27:56.379 [2024-07-12 14:32:48.124250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.124280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.124495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.124507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.124639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.124651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.124837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.124868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.124990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 
00:27:56.379 [2024-07-12 14:32:48.125207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.125443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.125609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.125761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.125949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.125961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 
00:27:56.379 [2024-07-12 14:32:48.126188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.126199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.379 [2024-07-12 14:32:48.126271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.379 [2024-07-12 14:32:48.126282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.379 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.126420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.126432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.126563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.126574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.126656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.126666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.126755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.126784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.126979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.127202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.127348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.127509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.127636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.127788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.127819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.128021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.128052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.128253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.128285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.128466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.128478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.128660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.128730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.128932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.128967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.129077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.129109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.129290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.129322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.129540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.129575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.129689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.129720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.129983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.129999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.130077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.130173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.130421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.130524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.130739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.130896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.130927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.131050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.131081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.131323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.131355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.131498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.131529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.131632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.131642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.131735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.131745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.131976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.132200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.132433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.132525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.132673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.132790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.132951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.132963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 
00:27:56.380 [2024-07-12 14:32:48.133026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.133037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.133175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.133186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.380 qpair failed and we were unable to recover it. 00:27:56.380 [2024-07-12 14:32:48.133275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.380 [2024-07-12 14:32:48.133304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.133501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.133532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.133727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.133758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.133863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.133875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.133948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.133959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.134158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.134170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.134318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.134359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.134504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.134535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.134737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.134773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.134893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.134924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.135049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.135261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.135428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.135570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.135787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.135904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.135935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.136155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.136187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.136431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.136472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.136652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.136667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.136779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.136810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.137614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.137910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.137942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.138160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.381 [2024-07-12 14:32:48.138371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.138657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.138772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.138863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 00:27:56.381 [2024-07-12 14:32:48.138964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.381 [2024-07-12 14:32:48.138978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.381 qpair failed and we were unable to recover it. 
00:27:56.382 [2024-07-12 14:32:48.139172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.139203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.139352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.139394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.139501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.139532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.139644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.139674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.139870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.139883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.140779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.140971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.141001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.141192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.141222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.141430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.141463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.141570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.141602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.141815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.141852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.142043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.142074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.142248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.142278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.142397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.142428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.142650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.142681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.142871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.142902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.143025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.143056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.143167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.143199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.143395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.143427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.143542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.143572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.143756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.143787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.144119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.144325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.144552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.144728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.144918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.144998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.145155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.145297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.145440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.145687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.145838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.145870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.146148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.146179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.146286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.146317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.146505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.146536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.382 qpair failed and we were unable to recover it.
00:27:56.382 [2024-07-12 14:32:48.146747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.382 [2024-07-12 14:32:48.146778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.146915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.146945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.147128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.147159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.147294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.147325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.147626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.147658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.147848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.147879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.148152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.148183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.148373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.148412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.148656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.148688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.148875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.148887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.149097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.149127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.149317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.149347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.149635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.149670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.149769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.149781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.149986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.149997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.150070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.150082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.150246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.150280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.150414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.150446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.150690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.150721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.150822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.150854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.151100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.151131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.151374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.151413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.151547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.151579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.151701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.151732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.151856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.151867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.152855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.152866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.153907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.153938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.154123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.154154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.154302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.383 [2024-07-12 14:32:48.154332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.383 qpair failed and we were unable to recover it.
00:27:56.383 [2024-07-12 14:32:48.154445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.154477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.154660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.154671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.154815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.154846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.155037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.155069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.155319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.155350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.155654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.155666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.155865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.155876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.156039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.156050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.156265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.156277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.156462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.156494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.156679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.156710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.156886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.156917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.157119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.157150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.157419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.384 [2024-07-12 14:32:48.157452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.384 qpair failed and we were unable to recover it.
00:27:56.384 [2024-07-12 14:32:48.157622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.157658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.157860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.157872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.158038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.158069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.158336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.158367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.158667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.158698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 
00:27:56.384 [2024-07-12 14:32:48.158846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.158857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.158925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.158936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.159079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.159109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.159225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.159256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.159446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.159477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 
00:27:56.384 [2024-07-12 14:32:48.159739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.159750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.159950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.159962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 
00:27:56.384 [2024-07-12 14:32:48.160516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.160999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.161030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.161177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.161208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 
00:27:56.384 [2024-07-12 14:32:48.161504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.161536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.161684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.161715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.161921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.161952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.162129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.162159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.162424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.162456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 
00:27:56.384 [2024-07-12 14:32:48.162592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.162628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.384 [2024-07-12 14:32:48.162765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.384 [2024-07-12 14:32:48.162777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.384 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.162869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.162879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.163014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.163025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.163189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.163220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.163345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.163375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.163496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.163531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.163779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.163809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.164016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.164047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.164298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.164328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.164595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.164626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.164753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.164784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.164967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.164978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.165084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.165115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.165247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.165278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.165545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.165582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.165709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.165721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.165973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.166141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.166309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.166477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.166705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.166850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.166881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.167249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.167810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.167841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.168038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.168068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.168326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.168357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.168528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.168560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.168820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.168832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.168935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.168947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 
00:27:56.385 [2024-07-12 14:32:48.169126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.169157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.169353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.385 [2024-07-12 14:32:48.169394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.385 qpair failed and we were unable to recover it. 00:27:56.385 [2024-07-12 14:32:48.169535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.169566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.169848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.169859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.169922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.169954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 
00:27:56.386 [2024-07-12 14:32:48.170216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.170247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.170521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.170533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.170612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.170624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.170808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.170839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.171086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.171117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 
00:27:56.386 [2024-07-12 14:32:48.171342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.171372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.171574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.171606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.171849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.171879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.172094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.172124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.172344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.172374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 
00:27:56.386 [2024-07-12 14:32:48.172593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.172624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.172818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.172849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.172993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.173024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.173246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.173277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.173520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.173552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 
00:27:56.386 [2024-07-12 14:32:48.173728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.173742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.173890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.173901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.174048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.174060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.174157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.174168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 00:27:56.386 [2024-07-12 14:32:48.174250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.386 [2024-07-12 14:32:48.174260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.386 qpair failed and we were unable to recover it. 
00:27:56.386 [2024-07-12 14:32:48.174339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.386 [2024-07-12 14:32:48.174350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.386 qpair failed and we were unable to recover it.
00:27:56.389 [2024-07-12 14:32:48.193945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.193957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.194110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.194268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.194429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.194648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 
00:27:56.389 [2024-07-12 14:32:48.194743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.194902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.194913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.195049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.195080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.195193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.195224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.195401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.195432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 
00:27:56.389 [2024-07-12 14:32:48.195672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.195684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.195835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.195846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.195992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.196170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.196252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 
00:27:56.389 [2024-07-12 14:32:48.196359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.196585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.196752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.196963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.197169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.197200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 
00:27:56.389 [2024-07-12 14:32:48.197311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.197341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.197482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.197514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.197689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.197731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.197901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.197913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 00:27:56.389 [2024-07-12 14:32:48.198002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.389 [2024-07-12 14:32:48.198015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.389 qpair failed and we were unable to recover it. 
00:27:56.389 [2024-07-12 14:32:48.198109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.198210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.198354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.198556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.198691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 
00:27:56.390 [2024-07-12 14:32:48.198796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.198971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.198987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 
00:27:56.390 [2024-07-12 14:32:48.199602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.199911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.199927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 
00:27:56.390 [2024-07-12 14:32:48.200221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 
00:27:56.390 [2024-07-12 14:32:48.200884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.200969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.200979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.201059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.201070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.201204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.201216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.201327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.201358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 
00:27:56.390 [2024-07-12 14:32:48.201550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.390 [2024-07-12 14:32:48.201580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.390 qpair failed and we were unable to recover it. 00:27:56.390 [2024-07-12 14:32:48.201712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.201742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.201935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.201965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.202212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.202243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.202420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.202452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 
00:27:56.391 [2024-07-12 14:32:48.202702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.202733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.202862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.202892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.203001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.203032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.203275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.203305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.203486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.203518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 
00:27:56.391 [2024-07-12 14:32:48.203654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.203685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.203949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.203980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.204097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.204127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.204313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.204344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.204563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.204575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 
00:27:56.391 [2024-07-12 14:32:48.204673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.204707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.204971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.205002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.205153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.205189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.205309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.205340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.205567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.205599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 
00:27:56.391 [2024-07-12 14:32:48.205732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.205763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.206185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.206201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.206367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.206382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.206454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.206464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 00:27:56.391 [2024-07-12 14:32:48.206610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.391 [2024-07-12 14:32:48.206622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.391 qpair failed and we were unable to recover it. 
00:27:56.391 [2024-07-12 14:32:48.206705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.391 [2024-07-12 14:32:48.206716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.391 qpair failed and we were unable to recover it.
00:27:56.391 [... the connect()/sock-connection-error entry pair above repeated for each retry of tqpair=0x7f924c000b90, timestamps 14:32:48.206864 through 14:32:48.221805 ...]
00:27:56.393 [... retries of tqpair=0x7f924c000b90 continued, timestamps 14:32:48.221977 through 14:32:48.222679 ...]
00:27:56.394 [2024-07-12 14:32:48.222866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1226000 is same with the state(5) to be set
00:27:56.394 [2024-07-12 14:32:48.223108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.394 [2024-07-12 14:32:48.223143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.394 qpair failed and we were unable to recover it.
00:27:56.394 [... the connect()/sock-connection-error entry pair repeated for each retry of tqpair=0x7f9244000b90, timestamps 14:32:48.223302 through 14:32:48.225523 ...]
00:27:56.394 [2024-07-12 14:32:48.225710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.225725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.225817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.225833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.225991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.226006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.226093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.226108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.226356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.226399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 
00:27:56.394 [2024-07-12 14:32:48.226644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.226675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.226806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.226822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.227043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.227075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.227261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.227292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.227414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.227455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 
00:27:56.394 [2024-07-12 14:32:48.227623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.227642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.227808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.227840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.227975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.228007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.228197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.228228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.228341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.228372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 
00:27:56.394 [2024-07-12 14:32:48.228534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.228565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.394 qpair failed and we were unable to recover it. 00:27:56.394 [2024-07-12 14:32:48.228743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.394 [2024-07-12 14:32:48.228774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.229045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.229076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.229259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.229291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.229486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.229517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.229735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.229766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.229877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.229909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.230030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.230061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.230265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.230298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.230499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.230531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.230777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.230808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.230994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.231009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.231157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.231172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.231420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.231452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.231593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.231625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.231872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.231903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.232075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.232090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.232254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.232270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.232484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.232500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.232658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.232673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.232837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.232852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.232998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.233580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.233845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.233855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.234000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.234141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.234308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.234458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.234668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.395 [2024-07-12 14:32:48.234872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 
00:27:56.395 [2024-07-12 14:32:48.234952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.395 [2024-07-12 14:32:48.234964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.395 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.235715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.235972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.235983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.236184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.236364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.236530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.236647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.236866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.236952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.236962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.237115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.237273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.237424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.237575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.237668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.237748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.237917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.237948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.238077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.238108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.238246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.238278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.238413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.238446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.238585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.238616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.238907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.238937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.239098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.239332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.239460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.239556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.239671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.239853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.239868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.240013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.240164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.240464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 
00:27:56.396 [2024-07-12 14:32:48.240617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.240779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.240978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.240994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.241136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.396 [2024-07-12 14:32:48.241152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.396 qpair failed and we were unable to recover it. 00:27:56.396 [2024-07-12 14:32:48.241260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.241275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.241389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.241413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.241649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.241665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.241806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.241821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.241905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.241919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.242127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.242239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.242396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.242497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.242662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.242834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.242954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.242993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.243169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.243315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.243502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.243687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.243903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.243986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.243996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.244202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.244298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.244391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.244479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.244702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.244943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.244974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.245092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.245122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.245246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.245277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.245419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.245451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.245644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.245675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.245902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.245971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.246190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.246208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.246301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.246331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.246627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.246660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.246888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.246904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.247072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.247088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.247235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.247249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.247523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.247555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.397 [2024-07-12 14:32:48.247757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.247794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.247973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.248005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.248216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.248231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.248321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.248337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 00:27:56.397 [2024-07-12 14:32:48.248434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.397 [2024-07-12 14:32:48.248449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.397 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.248598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.248616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.248829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.248844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.249005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.249021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.249190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.249205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.249353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.249393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.249572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.249604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.249715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.249746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.250513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.250923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.250938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.251162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.251193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.251404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.251436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.251616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.251654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.251744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.251758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.251933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.251948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.252110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.252125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.252335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.252369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.252512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.252544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.252723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.252754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.252934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.252966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.253211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.253800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.253915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.253930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.254475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 00:27:56.398 [2024-07-12 14:32:48.254980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.254996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.398 qpair failed and we were unable to recover it. 
00:27:56.398 [2024-07-12 14:32:48.255211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.398 [2024-07-12 14:32:48.255226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.255373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.255394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.255530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.255546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.255618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.255632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.255749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.255764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 
00:27:56.399 [2024-07-12 14:32:48.255850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.255865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.256016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.256047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.256250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.256281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.256471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.256503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 00:27:56.399 [2024-07-12 14:32:48.256756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.399 [2024-07-12 14:32:48.256771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.399 qpair failed and we were unable to recover it. 
00:27:56.399 [2024-07-12 14:32:48.256914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.256930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.257117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.257133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.257309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.257325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.257565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.257596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.257719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.257755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.257933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.257964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.258076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.258107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.258234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.258265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.258478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.258510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.258699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.258730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.258959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.258975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.259125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.259140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.259366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.259420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.259629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.259660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.259786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.259817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.259959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.259974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.260139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.260154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.260245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.260276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.260551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.260584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.260828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.260844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.260998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.261014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.261163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.261178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.261411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.261442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.261613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.261643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.261916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.399 [2024-07-12 14:32:48.261946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.399 qpair failed and we were unable to recover it.
00:27:56.399 [2024-07-12 14:32:48.262138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.262154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.262370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.262409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.262543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.262575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.262691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.262721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.262899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.262930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.263185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.263216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.263412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.263448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.263644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.263675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.263802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.263832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.264040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.264071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.264263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.264294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.264404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.264437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.264695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.264725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.264972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.264987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.265151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.265182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.265289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.265319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.265569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.265603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.265813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.265829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.265936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.265967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.266236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.266268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.266546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.266585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.266760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.266776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.266923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.266954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.267222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.267252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.267444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.267476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.267590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.267621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.267872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.267888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.267992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.268008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.268181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.268212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.268399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.268431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.268579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.268610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.268870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.268901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.269976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.269990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.270076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.400 [2024-07-12 14:32:48.270090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.400 qpair failed and we were unable to recover it.
00:27:56.400 [2024-07-12 14:32:48.270201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.270215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.270364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.270427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.270622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.270653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.270789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.270820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.270933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.270963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.271087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.271116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.271368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.271446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.271651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.271686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.271889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.271921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.272879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.272894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.273887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.273918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.274024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.274055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.274301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.274332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.274590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.274622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.274729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.274759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.274960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.274976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.275149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.275164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.275251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.401 [2024-07-12 14:32:48.275265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.401 qpair failed and we were unable to recover it.
00:27:56.401 [2024-07-12 14:32:48.275414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.275431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.275515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.275529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.275738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.275754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.275901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.275916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.275995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 
00:27:56.401 [2024-07-12 14:32:48.276170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.276361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.276463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.276622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.276714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 
00:27:56.401 [2024-07-12 14:32:48.276895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.276910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.277058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.401 [2024-07-12 14:32:48.277074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.401 qpair failed and we were unable to recover it. 00:27:56.401 [2024-07-12 14:32:48.277160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.277174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.277332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.277348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.277427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.277442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.277654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.277669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.277746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.277760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.278345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.278831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.278981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.279136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.279348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.279518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.279742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.279974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.279986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.280159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.280174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.280261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.280292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.280487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.280519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.280642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.280673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.280886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.280917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.281025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.281200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.281360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.281544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.281705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.281864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.281955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.281965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.282714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.282967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.282978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.283112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.283123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 
00:27:56.402 [2024-07-12 14:32:48.283280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.283292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.283432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.283444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.402 qpair failed and we were unable to recover it. 00:27:56.402 [2024-07-12 14:32:48.283532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.402 [2024-07-12 14:32:48.283542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.283627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.283637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.283724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.283735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.283814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.283824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.283934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.283963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.284190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.284257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.284456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.284493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.284643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.284674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.284791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.284822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.285084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.285100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.285262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.285277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.285440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.285458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.285681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.285712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.285908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.285939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.286186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.286218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.286409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.286440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.286649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.286681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.286925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.286940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.287142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.287161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.287389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.287422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.287608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.287640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.287854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.287885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.288107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.288123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.288337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.288352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.288499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.288515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.288669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.288685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.288914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.288945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.289079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.289110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.403 [2024-07-12 14:32:48.289235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.289266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.289450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.289486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.289677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.289709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.289849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.289880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 00:27:56.403 [2024-07-12 14:32:48.290095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.403 [2024-07-12 14:32:48.290111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.403 qpair failed and we were unable to recover it. 
00:27:56.404 [2024-07-12 14:32:48.290258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.404 [2024-07-12 14:32:48.290273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.404 qpair failed and we were unable to recover it.
00:27:56.404 [... the same three-line record — connect() failed, errno = 111 (posix.c:1038:posix_sock_create), sock connection error (nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock) against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 14:32:48.290368 through 14:32:48.310282 for tqpairs 0x7f9254000b90, 0x1217ed0, 0x7f9244000b90, and 0x7f924c000b90; repeats elided ...]
00:27:56.407 [2024-07-12 14:32:48.310423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.310439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.310585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.310600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.310760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.310775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.310866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.310880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.310974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.310990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.311086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.311102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.311264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.311294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.311414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.311446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.311648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.311680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.311794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.311825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.311990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.312115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.312353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.312470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.312650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.312771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.312932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.312948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.313538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.313973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.313983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.314187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.314285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.314391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.314618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.314774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.314918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.314948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 
00:27:56.407 [2024-07-12 14:32:48.315637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.407 [2024-07-12 14:32:48.315730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.407 qpair failed and we were unable to recover it. 00:27:56.407 [2024-07-12 14:32:48.315891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.315903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.315979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.315989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.316212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.316641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.316938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.316949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.317212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.317638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.317947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.317959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.318144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.318172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.318353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.318400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.318506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.318537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.318717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.318747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.318872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.318882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.319026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.319185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.319338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.319518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.319678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 00:27:56.408 [2024-07-12 14:32:48.319898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.408 [2024-07-12 14:32:48.319929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.408 qpair failed and we were unable to recover it. 
00:27:56.408 [2024-07-12 14:32:48.320039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.320070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.320348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.320506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.320518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.320603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.320615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.320890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.320921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 
00:27:56.409 [2024-07-12 14:32:48.321093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.321123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.321399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.321432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.321615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.321646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.321780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.321811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 00:27:56.409 [2024-07-12 14:32:48.321933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.409 [2024-07-12 14:32:48.321945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.409 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.336729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.336741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.336819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.336830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.336979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.336991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.337382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.337782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.337860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.337870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.338458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.338943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.338955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.339031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.339478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.339835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 
00:27:56.412 [2024-07-12 14:32:48.339913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.339923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.340061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.340072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.412 [2024-07-12 14:32:48.340154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.412 [2024-07-12 14:32:48.340165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.412 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.340233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.340402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.340529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.340675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.340764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.340856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.340868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.341081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.341666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.341940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.341951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.342371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.342857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.342935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.342945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.343449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.343912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.343924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.344010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 
00:27:56.413 [2024-07-12 14:32:48.344560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.413 qpair failed and we were unable to recover it. 00:27:56.413 [2024-07-12 14:32:48.344858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.413 [2024-07-12 14:32:48.344870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 00:27:56.414 [2024-07-12 14:32:48.344964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.344975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 
00:27:56.414 [2024-07-12 14:32:48.345078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.345089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 00:27:56.414 [2024-07-12 14:32:48.345157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.345167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 00:27:56.414 [2024-07-12 14:32:48.345231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.345242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 00:27:56.414 [2024-07-12 14:32:48.345322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.345334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 00:27:56.414 [2024-07-12 14:32:48.345480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.414 [2024-07-12 14:32:48.345492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.414 qpair failed and we were unable to recover it. 
00:27:56.414 [2024-07-12 14:32:48.347493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.414 [2024-07-12 14:32:48.347528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.414 qpair failed and we were unable to recover it.
00:27:56.414 [2024-07-12 14:32:48.347655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.414 [2024-07-12 14:32:48.347689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.414 qpair failed and we were unable to recover it.
00:27:56.706 [2024-07-12 14:32:48.359424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.706 [2024-07-12 14:32:48.359436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.706 qpair failed and we were unable to recover it. 00:27:56.706 [2024-07-12 14:32:48.359618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.706 [2024-07-12 14:32:48.359629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.706 qpair failed and we were unable to recover it. 00:27:56.706 [2024-07-12 14:32:48.359780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.706 [2024-07-12 14:32:48.359792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.706 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.359874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.359886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.359979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.359990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.360066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.360562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.360963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.360973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.361168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.361652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.361985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.361995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.362445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.362922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.362936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 
00:27:56.707 [2024-07-12 14:32:48.363027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.363041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.363137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.363150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.707 [2024-07-12 14:32:48.363297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.707 [2024-07-12 14:32:48.363311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.707 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.363471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.363487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.363568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.363582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.363705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.363719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.363819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.363833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.363920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.363933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.364199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.364736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.364977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.364987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.365449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.365926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.365937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.366018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 
00:27:56.708 [2024-07-12 14:32:48.366586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.708 [2024-07-12 14:32:48.366815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.708 [2024-07-12 14:32:48.366825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.708 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.366887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.366897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.366990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 
00:27:56.709 [2024-07-12 14:32:48.367146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 
00:27:56.709 [2024-07-12 14:32:48.367621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.367862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.367872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 00:27:56.709 [2024-07-12 14:32:48.368080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.368090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it. 
00:27:56.709 [2024-07-12 14:32:48.368159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.709 [2024-07-12 14:32:48.368170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.709 qpair failed and we were unable to recover it.
00:27:56.713 [identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" error pair repeats for tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420 from 14:32:48.368306 through 14:32:48.379495]
00:27:56.713 [2024-07-12 14:32:48.379670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.379683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.379761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.379771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.379910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.379921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.379992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 
00:27:56.713 [2024-07-12 14:32:48.380150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 
00:27:56.713 [2024-07-12 14:32:48.380885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.380969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.380980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.381049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.381060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.381141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.381151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 00:27:56.713 [2024-07-12 14:32:48.381230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.713 [2024-07-12 14:32:48.381241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.713 qpair failed and we were unable to recover it. 
00:27:56.713 [2024-07-12 14:32:48.381315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.381442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.381525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.381622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.381707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.381792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.381874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.381885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.382415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.382850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.382951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.382963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.383467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.383944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.383956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.384024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.384035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.384205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.384216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.384354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.384366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.384441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.384454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 00:27:56.714 [2024-07-12 14:32:48.384522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.714 [2024-07-12 14:32:48.384534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.714 qpair failed and we were unable to recover it. 
00:27:56.714 [2024-07-12 14:32:48.384593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.384604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.384687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.384698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.384774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.384785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.384861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.384880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.384953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.384964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 
00:27:56.715 [2024-07-12 14:32:48.385045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 
00:27:56.715 [2024-07-12 14:32:48.385469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.385932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.385944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 
00:27:56.715 [2024-07-12 14:32:48.386013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.386024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.386093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.386104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.386237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.715 [2024-07-12 14:32:48.386248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.715 qpair failed and we were unable to recover it. 00:27:56.715 [2024-07-12 14:32:48.386395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.386408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.386496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.386507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 
00:27:56.716 [2024-07-12 14:32:48.386601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.386612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.386695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.386707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.386848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.386859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 
00:27:56.716 [2024-07-12 14:32:48.387200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.716 [2024-07-12 14:32:48.387672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 
00:27:56.716 [2024-07-12 14:32:48.387767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.716 [2024-07-12 14:32:48.387778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.716 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.387855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.387866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.388386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.388890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.388975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.388988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.389514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.389911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.389922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.389990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.390426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 00:27:56.717 [2024-07-12 14:32:48.390967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.717 [2024-07-12 14:32:48.390980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.717 qpair failed and we were unable to recover it. 
00:27:56.717 [2024-07-12 14:32:48.391052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.391256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.391400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.391483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.391591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 
00:27:56.718 [2024-07-12 14:32:48.391767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.391913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.391925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 
00:27:56.718 [2024-07-12 14:32:48.392458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.392865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.392876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 
00:27:56.718 [2024-07-12 14:32:48.393053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 
00:27:56.718 [2024-07-12 14:32:48.393640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.718 qpair failed and we were unable to recover it. 00:27:56.718 [2024-07-12 14:32:48.393866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.718 [2024-07-12 14:32:48.393877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.394351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.394890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.394918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.395002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.395094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.395305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.395469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.395579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.395683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.395902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.395913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.396340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.396906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.396918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.397090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.397186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.397432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.397595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.397678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 
00:27:56.719 [2024-07-12 14:32:48.397797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.397901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.719 [2024-07-12 14:32:48.397913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.719 qpair failed and we were unable to recover it. 00:27:56.719 [2024-07-12 14:32:48.398005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 
00:27:56.720 [2024-07-12 14:32:48.398489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.398910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.398922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 
00:27:56.720 [2024-07-12 14:32:48.399085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 
00:27:56.720 [2024-07-12 14:32:48.399593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.399895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.399906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.400009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.400021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 00:27:56.720 [2024-07-12 14:32:48.400089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.720 [2024-07-12 14:32:48.400100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.720 qpair failed and we were unable to recover it. 
00:27:56.722 [2024-07-12 14:32:48.406163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.406927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.406941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.407092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.407104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.407176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.722 [2024-07-12 14:32:48.407188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.722 qpair failed and we were unable to recover it.
00:27:56.722 [2024-07-12 14:32:48.407275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.407287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.407444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.407456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.407545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.407557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.407694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.407707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.407912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.407927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 
00:27:56.722 [2024-07-12 14:32:48.407998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 
00:27:56.722 [2024-07-12 14:32:48.408658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.408937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.408950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.409020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.409032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.409111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.409123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 
00:27:56.722 [2024-07-12 14:32:48.409206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.409218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.409289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.722 [2024-07-12 14:32:48.409302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.722 qpair failed and we were unable to recover it. 00:27:56.722 [2024-07-12 14:32:48.409512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.409525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.409662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.409674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.409815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.409827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 
00:27:56.723 [2024-07-12 14:32:48.409906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.409919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 
00:27:56.723 [2024-07-12 14:32:48.410551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.410931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.410943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 
00:27:56.723 [2024-07-12 14:32:48.411155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 
00:27:56.723 [2024-07-12 14:32:48.411761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.411925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.411937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.412104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.412115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.412182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.412194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 00:27:56.723 [2024-07-12 14:32:48.412262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.723 [2024-07-12 14:32:48.412272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.723 qpair failed and we were unable to recover it. 
00:27:56.726 [2024-07-12 14:32:48.420856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.726 [2024-07-12 14:32:48.420866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.726 qpair failed and we were unable to recover it.
00:27:56.726 [2024-07-12 14:32:48.420943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.726 [2024-07-12 14:32:48.420962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.726 qpair failed and we were unable to recover it.
00:27:56.726 [2024-07-12 14:32:48.421053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.726 [2024-07-12 14:32:48.421069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.726 qpair failed and we were unable to recover it.
00:27:56.726 [2024-07-12 14:32:48.421147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.726 [2024-07-12 14:32:48.421162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.726 qpair failed and we were unable to recover it.
00:27:56.726 [2024-07-12 14:32:48.421308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.726 [2024-07-12 14:32:48.421323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.726 qpair failed and we were unable to recover it.
00:27:56.727 [2024-07-12 14:32:48.424589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.424604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.424754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.424768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.424915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.424927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.425316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.425775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.425874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.425886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.426485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.426904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.426920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.426996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.427538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.427861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 
00:27:56.727 [2024-07-12 14:32:48.427954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.727 [2024-07-12 14:32:48.427966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.727 qpair failed and we were unable to recover it. 00:27:56.727 [2024-07-12 14:32:48.428118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.428533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.428871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.428953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.428965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.429483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.429898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.429909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.429993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.430486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.430897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.430978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.430989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.431488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.431901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.728 [2024-07-12 14:32:48.431976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.431985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.432272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.432282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.432366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.432375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.432520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.432530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 00:27:56.728 [2024-07-12 14:32:48.432624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.728 [2024-07-12 14:32:48.432634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.728 qpair failed and we were unable to recover it. 
00:27:56.729 [2024-07-12 14:32:48.432814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.432822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.432907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.432916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.432982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.432991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 
00:27:56.729 [2024-07-12 14:32:48.433287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 00:27:56.729 [2024-07-12 14:32:48.433717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.729 [2024-07-12 14:32:48.433729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.729 qpair failed and we were unable to recover it. 
00:27:56.729 [2024-07-12 14:32:48.433809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.729 [2024-07-12 14:32:48.433819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.729 qpair failed and we were unable to recover it.
00:27:56.731 [2024-07-12 14:32:48.446183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.731 [2024-07-12 14:32:48.446194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.731 qpair failed and we were unable to recover it. 00:27:56.731 [2024-07-12 14:32:48.446265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.731 [2024-07-12 14:32:48.446276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.446624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.446950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.446961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.447179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.447703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.447974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.447986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.448217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.448738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.448974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.448985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.449134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.449565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.449918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.449990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.450073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.450201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.450304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.450408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.450487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.732 [2024-07-12 14:32:48.450724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 
00:27:56.732 [2024-07-12 14:32:48.450850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.732 [2024-07-12 14:32:48.450861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.732 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.450937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.450949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.451494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.451833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.451983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.451995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.452541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.452902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.452913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.452999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.453466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 00:27:56.733 [2024-07-12 14:32:48.453899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.453911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it. 
00:27:56.733 [2024-07-12 14:32:48.453995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.733 [2024-07-12 14:32:48.454007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.733 qpair failed and we were unable to recover it.
00:27:56.736 [2024-07-12 14:32:48.466650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.466661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.466732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.466744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.466880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.466892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.466959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.466970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 
00:27:56.736 [2024-07-12 14:32:48.467138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 
00:27:56.736 [2024-07-12 14:32:48.467648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.467986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.467997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 
00:27:56.736 [2024-07-12 14:32:48.468060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 
00:27:56.736 [2024-07-12 14:32:48.468566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.468983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.468994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 
00:27:56.736 [2024-07-12 14:32:48.469058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.469070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.736 [2024-07-12 14:32:48.469140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.736 [2024-07-12 14:32:48.469152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.736 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 
00:27:56.737 [2024-07-12 14:32:48.469548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.469899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 
00:27:56.737 [2024-07-12 14:32:48.469979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.469990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 
00:27:56.737 [2024-07-12 14:32:48.470493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.470908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.470919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 
00:27:56.737 [2024-07-12 14:32:48.471243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.471278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it. 00:27:56.737 [2024-07-12 14:32:48.471390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.737 [2024-07-12 14:32:48.471410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.737 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / qpair failure message pairs for tqpair=0x7f9254000b90 and tqpair=0x7f924c000b90 repeated verbatim through 2024-07-12 14:32:48.476573 ...]
00:27:56.738 [2024-07-12 14:32:48.476644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.476656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.476729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.476741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.476878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.476890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.476967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.476979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 
00:27:56.738 [2024-07-12 14:32:48.477223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 
00:27:56.738 [2024-07-12 14:32:48.477748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.477932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.477943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.478009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.478020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 00:27:56.738 [2024-07-12 14:32:48.478085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.738 [2024-07-12 14:32:48.478097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.738 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.478231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.478410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.478593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.478664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.478853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.478954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.478966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.479581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.479922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.479933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.480225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.480734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.480833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.480844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.481456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.481856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.481930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.481942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.482456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.482886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 
00:27:56.739 [2024-07-12 14:32:48.482984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.482995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.483062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.483073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.483136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.483148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.739 [2024-07-12 14:32:48.483233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.739 [2024-07-12 14:32:48.483244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.739 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.483327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.483421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.483517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.483663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.483767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.483855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.483932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.483943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.484356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.484765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.484872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.484883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.485408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.485887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.485969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.485981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.486389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.486825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.486923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.486999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.487011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.487097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.487108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.487182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.487193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 
00:27:56.740 [2024-07-12 14:32:48.487277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.487288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.487356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.740 [2024-07-12 14:32:48.487367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.740 qpair failed and we were unable to recover it. 00:27:56.740 [2024-07-12 14:32:48.487450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.487539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.487651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.487744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.487817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.487904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.487916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.487993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.488157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.488624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.488894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.488905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.489287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.489844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.489922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.489932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.490356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.490899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.490910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.491044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.491055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.491151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.491162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.491292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.491304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.491399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.491412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 00:27:56.741 [2024-07-12 14:32:48.491526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.741 [2024-07-12 14:32:48.491537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.741 qpair failed and we were unable to recover it. 
00:27:56.741 [2024-07-12 14:32:48.491635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.491647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.491733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.491745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.491814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.491826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.491923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.491935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.492104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.492617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.492972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.492984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.493056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.493687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.493872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.493884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.494263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.494808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.494977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.494988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.495632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.495979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.495991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 00:27:56.742 [2024-07-12 14:32:48.496070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.742 [2024-07-12 14:32:48.496081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.742 qpair failed and we were unable to recover it. 
00:27:56.742 [2024-07-12 14:32:48.496154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.742 [2024-07-12 14:32:48.496166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.742 qpair failed and we were unable to recover it.
00:27:56.742 [2024-07-12 14:32:48.496300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.742 [2024-07-12 14:32:48.496311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.742 qpair failed and we were unable to recover it.
00:27:56.742 [2024-07-12 14:32:48.496391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.742 [2024-07-12 14:32:48.496402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.742 qpair failed and we were unable to recover it.
00:27:56.742 [2024-07-12 14:32:48.496497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.742 [2024-07-12 14:32:48.496509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.742 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.496653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.496665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.496867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.496878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.496954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.496966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.497971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.497982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.498950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.498962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.499928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.499940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.500983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.500994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.501092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.501104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.501255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.501266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.501403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.743 [2024-07-12 14:32:48.501415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.743 qpair failed and we were unable to recover it.
00:27:56.743 [2024-07-12 14:32:48.501491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.501503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.501628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.501640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.501780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.501791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.501954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.501965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.502864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.502876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.503854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.503993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.504959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.504970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.505839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.505851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.506819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.506831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.507013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.507027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.507191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.744 [2024-07-12 14:32:48.507202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.744 qpair failed and we were unable to recover it.
00:27:56.744 [2024-07-12 14:32:48.507304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.745 [2024-07-12 14:32:48.507315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.745 qpair failed and we were unable to recover it.
00:27:56.745 [2024-07-12 14:32:48.507481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.745 [2024-07-12 14:32:48.507495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.745 qpair failed and we were unable to recover it.
00:27:56.745 [2024-07-12 14:32:48.507675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.745 [2024-07-12 14:32:48.507686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.745 qpair failed and we were unable to recover it.
00:27:56.745 [2024-07-12 14:32:48.507783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.507794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.507931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.507943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.508422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.508883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.508895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.509029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.509662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.509949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.509960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.510222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.510802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.510948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.510959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.511460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.511878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.511890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.745 [2024-07-12 14:32:48.512032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.512044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 
00:27:56.745 [2024-07-12 14:32:48.512118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.745 [2024-07-12 14:32:48.512130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.745 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.512640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.512951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.512963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.513253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.513795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.513946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.513957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.514626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.514921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.514997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.515078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.515510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.515861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.515873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.516120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 
00:27:56.746 [2024-07-12 14:32:48.516690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.516947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.516961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.746 qpair failed and we were unable to recover it. 00:27:56.746 [2024-07-12 14:32:48.517124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.746 [2024-07-12 14:32:48.517135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 
00:27:56.747 [2024-07-12 14:32:48.517285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.747 [2024-07-12 14:32:48.517296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 00:27:56.747 [2024-07-12 14:32:48.517384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.747 [2024-07-12 14:32:48.517395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 00:27:56.747 [2024-07-12 14:32:48.517465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.747 [2024-07-12 14:32:48.517476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 00:27:56.747 [2024-07-12 14:32:48.517701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.747 [2024-07-12 14:32:48.517712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 00:27:56.747 [2024-07-12 14:32:48.517854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.747 [2024-07-12 14:32:48.517866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.747 qpair failed and we were unable to recover it. 
00:27:56.747 [2024-07-12 14:32:48.518004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.518929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.518941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.519866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.519877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.520942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.520958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.521979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.521992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.522154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.522166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.522246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.522258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.522344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.522356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.747 [2024-07-12 14:32:48.522529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.747 [2024-07-12 14:32:48.522542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.747 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.522616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.522627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.522697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.522709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.522774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.522786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.522856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.522868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.523983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.523995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.524891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.524905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.525871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.525883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.526857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.526996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.527008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.527170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.527182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.527253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.527265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.527420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.527432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.527515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.748 [2024-07-12 14:32:48.527527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.748 qpair failed and we were unable to recover it.
00:27:56.748 [2024-07-12 14:32:48.527613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.527624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.527713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.527725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.527794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.527805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.527888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.527899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.527970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.527981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.528913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.528924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.529915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.529926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.749 [2024-07-12 14:32:48.530707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.749 qpair failed and we were unable to recover it.
00:27:56.749 [2024-07-12 14:32:48.530850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.530861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-07-12 14:32:48.530931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.530943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-07-12 14:32:48.531083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.531095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-07-12 14:32:48.531188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.531200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-07-12 14:32:48.531267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.531279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 
00:27:56.749 [2024-07-12 14:32:48.531352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.531364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-07-12 14:32:48.531439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.749 [2024-07-12 14:32:48.531451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.531532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.531543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.531629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.531641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.531801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.531813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.531880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.531890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.531959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.531971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.532358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.532806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.532901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.532998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.533244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.533705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.533857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.533870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.534633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.534883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.534894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.535137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.535672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.535953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.535965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.536052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.536064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-07-12 14:32:48.536145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.536158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-07-12 14:32:48.536224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.750 [2024-07-12 14:32:48.536237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-07-12 14:32:48.536575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.536897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.536910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-07-12 14:32:48.537232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-07-12 14:32:48.537899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.537986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.537998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-07-12 14:32:48.538600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.538981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.538993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-07-12 14:32:48.539066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.539078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-07-12 14:32:48.539163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.751 [2024-07-12 14:32:48.539176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.751 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() errno = 111 in posix_sock_create, sock connection error for tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats continuously from 14:32:48.539163 through 14:32:48.554308 ...]
00:27:56.754 [2024-07-12 14:32:48.554457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.554469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.554602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.554614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.554761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.554773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.554837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.554847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.554932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.554944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 
00:27:56.754 [2024-07-12 14:32:48.555033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 
00:27:56.754 [2024-07-12 14:32:48.555625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.555976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.555988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.556059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.556069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 
00:27:56.754 [2024-07-12 14:32:48.556151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.754 [2024-07-12 14:32:48.556162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.754 qpair failed and we were unable to recover it. 00:27:56.754 [2024-07-12 14:32:48.556297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.556406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.556508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.556591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.556740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.556976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.556987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.557471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.557942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.557953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.558124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.558798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.558985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.558997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.559396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.559748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.559930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.559942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.560172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.560288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.560451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.560601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.560679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.560843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.560855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.561032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.561044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.561134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.561145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.561214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.561226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 
00:27:56.755 [2024-07-12 14:32:48.561294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.561306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.755 qpair failed and we were unable to recover it. 00:27:56.755 [2024-07-12 14:32:48.561484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.755 [2024-07-12 14:32:48.561497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.561585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.561597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.561659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.561669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.561823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.561836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 
00:27:56.756 [2024-07-12 14:32:48.561921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.561933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 
00:27:56.756 [2024-07-12 14:32:48.562457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.562836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 
00:27:56.756 [2024-07-12 14:32:48.562925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.562936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.563007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.563019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.563090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.563101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.563200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.563212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 00:27:56.756 [2024-07-12 14:32:48.563283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.756 [2024-07-12 14:32:48.563295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.756 qpair failed and we were unable to recover it. 
00:27:56.756 [2024-07-12 14:32:48.563498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.563512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.563649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.563661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.563807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.563819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.563911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.563923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.564980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.564991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.565148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.565160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.565364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.565376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.565480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.565492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.565640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.565652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.756 [2024-07-12 14:32:48.565730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.756 [2024-07-12 14:32:48.565742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.756 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.565823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.565835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.565916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.565928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.566934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.566968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.567846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.567862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.568023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.568036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.568217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.568248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.568364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.568406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.568585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.568615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.568736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.568767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.569045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.569075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.569322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.569352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.569518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.569530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.569652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.569683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.569861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.569891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.570023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.570054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.570230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.570261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.570438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.570470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.570648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.570679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.757 [2024-07-12 14:32:48.570871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.757 [2024-07-12 14:32:48.570883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.757 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.570975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.570987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.571174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.571205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.571418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.571450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.571634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.571665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.571853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.571884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.572132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.572163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.572343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.572374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.572566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.572597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.572759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.572771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.572917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.572948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.573143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.573173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.573300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.573331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.573526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.573538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.573717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.573747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.573950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.573981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.574208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.574390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.574598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.574689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.574862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.574972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.575176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.575348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.575541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.575697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.575938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.575968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.576142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.576154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.576320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.576351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.576634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.576703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.576917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.576934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.577903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.577918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.578090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.578122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.578261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.578292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.578499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.578533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.578665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.758 [2024-07-12 14:32:48.578681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.758 qpair failed and we were unable to recover it.
00:27:56.758 [2024-07-12 14:32:48.578767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.578782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.578898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.578930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.579939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.579970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.580104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.580135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.580256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.580288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.580485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.580517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.580698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.580729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.580931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.580946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.581111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.581142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.581323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.581354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.581509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.581541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.581729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.581766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.582951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.759 [2024-07-12 14:32:48.582982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.759 qpair failed and we were unable to recover it.
00:27:56.759 [2024-07-12 14:32:48.583173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.583204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.583338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.583369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.583588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.583622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.583817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.583832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.583905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.583919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 
00:27:56.759 [2024-07-12 14:32:48.584009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 
00:27:56.759 [2024-07-12 14:32:48.584630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.584932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.584943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.585028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.585039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 
00:27:56.759 [2024-07-12 14:32:48.585177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.585189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.759 [2024-07-12 14:32:48.585265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.759 [2024-07-12 14:32:48.585277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.759 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.585441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.585453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.585525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.585536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.585636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.585668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.585765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.585781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.585931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.585947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.586482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.586873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.586888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.587031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.587603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.587929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.587941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.588181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.588661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.588926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.588937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.589185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-07-12 14:32:48.589724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.760 [2024-07-12 14:32:48.589973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.760 [2024-07-12 14:32:48.589983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.760 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-07-12 14:32:48.590207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-07-12 14:32:48.590775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.590950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.590962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-07-12 14:32:48.591451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 00:27:56.761 [2024-07-12 14:32:48.591886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.761 [2024-07-12 14:32:48.591896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-07-12 14:32:48.591965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.761 [2024-07-12 14:32:48.591975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.761 qpair failed and we were unable to recover it.
[the connect()/qpair-failure triplet above repeats for tqpair=0x7f924c000b90 on every retry through 14:32:48.599145]
00:27:56.763 [2024-07-12 14:32:48.599313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.763 [2024-07-12 14:32:48.599332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.763 qpair failed and we were unable to recover it.
[same triplet repeats for tqpair=0x7f9254000b90 through 14:32:48.600766]
00:27:56.763 [2024-07-12 14:32:48.600858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.763 [2024-07-12 14:32:48.600873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.763 qpair failed and we were unable to recover it.
[same triplet repeats for tqpair=0x7f924c000b90 through 14:32:48.604519]
00:27:56.764 [2024-07-12 14:32:48.604645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.764 [2024-07-12 14:32:48.604679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:56.764 qpair failed and we were unable to recover it.
[same triplet repeats for tqpair=0x1217ed0 through 14:32:48.606009]
00:27:56.764 [2024-07-12 14:32:48.606110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.606209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.606311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.606421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.606630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:56.764 [2024-07-12 14:32:48.606833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.606849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:56.764 [2024-07-12 14:32:48.607464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.607897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.607912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:56.764 [2024-07-12 14:32:48.608261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.608861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.608876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:56.764 [2024-07-12 14:32:48.609034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.609049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.609151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.609166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.609417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.609434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.609664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.764 [2024-07-12 14:32:48.609679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.764 qpair failed and we were unable to recover it. 00:27:56.764 [2024-07-12 14:32:48.609854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.609869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.610049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.610731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.610852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.610998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.611353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.611795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.611947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.611959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.612544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.612985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.612996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.613086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.613648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.613909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.613922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.614011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.614112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.614209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.614301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.614522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 00:27:56.765 [2024-07-12 14:32:48.614677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.765 [2024-07-12 14:32:48.614689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.765 qpair failed and we were unable to recover it. 
00:27:56.765 [2024-07-12 14:32:48.614774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.614786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.614858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.614870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.615365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.615885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.615967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.615979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.616666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.616926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.616938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.617390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.617894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.617984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.617996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.618669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.618866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.618879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.619046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.619058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 00:27:56.766 [2024-07-12 14:32:48.619208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.766 [2024-07-12 14:32:48.619221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.766 qpair failed and we were unable to recover it. 
00:27:56.766 [2024-07-12 14:32:48.619374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.619473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.619562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.619666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.619815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.619908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.619921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.620464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.620874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.620886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.621151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.621811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.621921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.621933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.622145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.622292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.622474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.622556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.622794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.622971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.622994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.623318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.623828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.623917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.623928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.624028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.624040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.624193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.624204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.624283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.624294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.624445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.624457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 
00:27:56.767 [2024-07-12 14:32:48.624642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.767 [2024-07-12 14:32:48.624656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.767 qpair failed and we were unable to recover it. 00:27:56.767 [2024-07-12 14:32:48.624758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.624770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.624931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.624943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.625334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.625863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.625875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.626089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.626699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.626926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.626939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.627219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.627852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.627946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.627958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.768 [2024-07-12 14:32:48.628501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.628904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 00:27:56.768 [2024-07-12 14:32:48.628987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.768 [2024-07-12 14:32:48.629010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.768 qpair failed and we were unable to recover it. 
00:27:56.771 [2024-07-12 14:32:48.641248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.641260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.641355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.641367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.641491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.641526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.641614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.641631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.641730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.641745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.641928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.771 [2024-07-12 14:32:48.641942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.771 qpair failed and we were unable to recover it. 00:27:56.771 [2024-07-12 14:32:48.642027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.771 [2024-07-12 14:32:48.642040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.771 qpair failed and we were unable to recover it. 00:27:56.771 [2024-07-12 14:32:48.642175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.771 [2024-07-12 14:32:48.642187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.771 qpair failed and we were unable to recover it. 00:27:56.771 [2024-07-12 14:32:48.642326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.771 [2024-07-12 14:32:48.642338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.771 qpair failed and we were unable to recover it. 00:27:56.771 [2024-07-12 14:32:48.642568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.771 [2024-07-12 14:32:48.642581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.771 qpair failed and we were unable to recover it. 
00:27:56.771 [2024-07-12 14:32:48.642663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.642674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.642898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.642911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.643855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.643869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.644867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.644879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.645790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.771 [2024-07-12 14:32:48.645808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.771 qpair failed and we were unable to recover it.
00:27:56.771 [2024-07-12 14:32:48.646025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.646902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.646918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.647826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.647838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.648815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.648827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.649910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.649922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.650924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.650936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.651070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.651081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.651231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.651242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.651394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.651407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.651502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.772 [2024-07-12 14:32:48.651515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.772 qpair failed and we were unable to recover it.
00:27:56.772 [2024-07-12 14:32:48.651610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.651622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.651827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.651839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.652937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.652949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.653977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.653990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.654880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.654893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.773 [2024-07-12 14:32:48.655870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.773 qpair failed and we were unable to recover it.
00:27:56.773 [2024-07-12 14:32:48.655966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.655979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 
00:27:56.773 [2024-07-12 14:32:48.656542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.656904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.656916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 00:27:56.773 [2024-07-12 14:32:48.657003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.773 [2024-07-12 14:32:48.657016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.773 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.657099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.657668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.657935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.657947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.658290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.658750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.658839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.658854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.659100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.659231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.659392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.659618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.659795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.659975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.659991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.660541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.660954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.660966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.661135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 
00:27:56.774 [2024-07-12 14:32:48.661867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.774 qpair failed and we were unable to recover it. 00:27:56.774 [2024-07-12 14:32:48.661975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.774 [2024-07-12 14:32:48.661987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.662648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.662987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.662999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.663188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.663281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.663365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.663529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.663749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.663976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.663988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.664065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.664232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.664439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.664618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.664702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.664875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.664887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.665523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.665964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.665976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.666137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.666292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.666450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.666609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.666701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 
00:27:56.775 [2024-07-12 14:32:48.666847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.666948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.666960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.667126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.667138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.667206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.775 [2024-07-12 14:32:48.667217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.775 qpair failed and we were unable to recover it. 00:27:56.775 [2024-07-12 14:32:48.667316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.776 [2024-07-12 14:32:48.667327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:56.776 qpair failed and we were unable to recover it. 
00:27:56.776 [2024-07-12 14:32:48.667396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.667423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.667507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.667520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.667680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.667692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.667834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.667847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.668968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.668980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.669943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.669955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.670983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.670994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.671837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.671978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.672765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.672778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.673013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.673025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.673163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.673174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.673264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.673277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.776 [2024-07-12 14:32:48.673426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.776 [2024-07-12 14:32:48.673439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.776 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.673532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.673544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.673640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.673652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.673862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.673874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.674899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.675877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.675890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.676934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.676945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.677975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.677987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.777 [2024-07-12 14:32:48.678763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.777 qpair failed and we were unable to recover it.
00:27:56.777 [2024-07-12 14:32:48.678915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.678927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.679908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.679999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.680826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.680839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.681869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.681882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.682030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.682042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.682121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.682131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:56.778 [2024-07-12 14:32:48.682302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.778 [2024-07-12 14:32:48.682314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:56.778 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.682404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.682433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.682567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.682580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.682733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.682746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.682837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.682850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.682920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.682931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.683032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.683044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.683136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.683147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.683219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.683229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.683385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.064 [2024-07-12 14:32:48.683397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.064 qpair failed and we were unable to recover it.
00:27:57.064 [2024-07-12 14:32:48.683495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.683508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.683583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.683594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.683796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.683809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.683897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.683910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.684053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.684066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 
00:27:57.064 [2024-07-12 14:32:48.684151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.684164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.684370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.684387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.684536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.684548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.684756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.684768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.684997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 
00:27:57.064 [2024-07-12 14:32:48.685090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.685192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.685272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.685444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.685683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 
00:27:57.064 [2024-07-12 14:32:48.685832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.685944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.685957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.686034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.686045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.064 qpair failed and we were unable to recover it. 00:27:57.064 [2024-07-12 14:32:48.686185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.064 [2024-07-12 14:32:48.686197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.686350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.686362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.686543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.686645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.686657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.686823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.686835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.686939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.686952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.687091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.687248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.687408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.687642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.687793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.687883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.687896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.688053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.688271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.688386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.688627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.688810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.688921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.688932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.689566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.689918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.689930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.690137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.690149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.690357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.690369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.690442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.690454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.690597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.690609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.690788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.690800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.691199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 
00:27:57.065 [2024-07-12 14:32:48.691792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.691944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.065 [2024-07-12 14:32:48.691956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.065 qpair failed and we were unable to recover it. 00:27:57.065 [2024-07-12 14:32:48.692105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 
00:27:57.066 [2024-07-12 14:32:48.692369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.692900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.692911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 
00:27:57.066 [2024-07-12 14:32:48.692992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.693082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.693257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.693425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.693681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 
00:27:57.066 [2024-07-12 14:32:48.693851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.693864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 
00:27:57.066 [2024-07-12 14:32:48.694489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.694911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.694922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 00:27:57.066 [2024-07-12 14:32:48.695057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.066 [2024-07-12 14:32:48.695069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.066 qpair failed and we were unable to recover it. 
00:27:57.069 [2024-07-12 14:32:48.710386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.069 [2024-07-12 14:32:48.710398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.069 qpair failed and we were unable to recover it. 00:27:57.069 [2024-07-12 14:32:48.710495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.069 [2024-07-12 14:32:48.710507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.069 qpair failed and we were unable to recover it. 00:27:57.069 [2024-07-12 14:32:48.710585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.069 [2024-07-12 14:32:48.710597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.069 qpair failed and we were unable to recover it. 00:27:57.069 [2024-07-12 14:32:48.710666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.069 [2024-07-12 14:32:48.710677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.069 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.710757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.710768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.710835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.710845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.710930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.710942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.711515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.711941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.711951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.712041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.712638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.712971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.712982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.713139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 
00:27:57.070 [2024-07-12 14:32:48.713627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.713969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.070 [2024-07-12 14:32:48.713980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.070 qpair failed and we were unable to recover it. 00:27:57.070 [2024-07-12 14:32:48.714061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.714215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.714316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.714482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.714630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.714875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.714962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.714972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.715196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.715207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.715360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.715371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.715579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.715591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.715756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.715768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.715836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.715846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.715993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.716501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.716953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.716965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.717167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.717684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.717911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.717921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.718354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.718767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.718919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.718933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.719025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.719036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.719113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.719124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.719194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.719205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 00:27:57.071 [2024-07-12 14:32:48.719360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.071 [2024-07-12 14:32:48.719372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.071 qpair failed and we were unable to recover it. 
00:27:57.071 [2024-07-12 14:32:48.719446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.719595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.719681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.719777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.719873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.719955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.719966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.720127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.720287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.720434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.720649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.720726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.720886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.720898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.721352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.721893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.721905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.721993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.722749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.722921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.722932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.723065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.723076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 00:27:57.072 [2024-07-12 14:32:48.723233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.072 [2024-07-12 14:32:48.723245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.072 qpair failed and we were unable to recover it. 
00:27:57.072 [2024-07-12 14:32:48.723389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.723463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.723554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.723653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.723759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.723842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.723954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.723965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.724540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.724952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.724963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.725046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.725125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.725347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.725457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.725612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.725784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.725936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.725948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.726579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.726843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.726854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.073 [2024-07-12 14:32:48.727374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 00:27:57.073 [2024-07-12 14:32:48.727852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.073 [2024-07-12 14:32:48.727863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.073 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.727946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.727957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.728103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.728209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.728370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.728574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.728740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.728906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.728918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.729174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.729264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.729446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.729632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.729806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.729970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.729982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.730137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.730306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.730474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.730597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.730689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.730929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.730940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.731021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.731262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.731374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.731539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.731640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.731798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.731978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.731990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.732682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.732925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.732937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.733091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.733171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 
00:27:57.074 [2024-07-12 14:32:48.733398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.733546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.733766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.074 [2024-07-12 14:32:48.733918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.074 [2024-07-12 14:32:48.733930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.074 qpair failed and we were unable to recover it. 00:27:57.075 [2024-07-12 14:32:48.733997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.075 [2024-07-12 14:32:48.734007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.075 qpair failed and we were unable to recover it. 
00:27:57.075 [2024-07-12 14:32:48.734180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.734959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.734970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.735969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.735981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.736953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.736964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.737968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.737980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.738882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.738893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.739010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.739045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.739153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.075 [2024-07-12 14:32:48.739171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.075 qpair failed and we were unable to recover it.
00:27:57.075 [2024-07-12 14:32:48.739334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.739496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.739589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.739687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.739860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.739941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.739951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.740890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.740902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.741941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.741952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.742906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.742919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.743888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.743899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.744050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.744061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.744303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.744314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.744534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.744546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.744773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.744784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.744991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.745006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.745101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.076 [2024-07-12 14:32:48.745112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.076 qpair failed and we were unable to recover it.
00:27:57.076 [2024-07-12 14:32:48.745333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.745345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.745483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.745495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.745682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.745693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.745844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.745855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.746959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.746969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.747049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.747060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.747143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.077 [2024-07-12 14:32:48.747155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.077 qpair failed and we were unable to recover it.
00:27:57.077 [2024-07-12 14:32:48.747251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.747402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.747492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.747588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.747683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 
00:27:57.077 [2024-07-12 14:32:48.747840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.747984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.747995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 
00:27:57.077 [2024-07-12 14:32:48.748469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.748859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.748870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 
00:27:57.077 [2024-07-12 14:32:48.749086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 
00:27:57.077 [2024-07-12 14:32:48.749753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.749848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.749859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.750004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.750017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.750099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.750110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 00:27:57.077 [2024-07-12 14:32:48.750204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.077 [2024-07-12 14:32:48.750216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.077 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.750284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.750437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.750517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.750728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.750806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.750878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.750888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.751639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.751742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.751992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.752326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.752905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.752975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.752993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.753583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.753837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.753990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.754071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 
00:27:57.078 [2024-07-12 14:32:48.754221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.754382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.754539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.754718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.078 qpair failed and we were unable to recover it. 00:27:57.078 [2024-07-12 14:32:48.754883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.078 [2024-07-12 14:32:48.754894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.755048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.755597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.755851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.755988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.756152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.756675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.756926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.756936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.757502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 00:27:57.079 [2024-07-12 14:32:48.757955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.757967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it. 
00:27:57.079 [2024-07-12 14:32:48.758106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.079 [2024-07-12 14:32:48.758117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.079 qpair failed and we were unable to recover it.
00:27:57.079-00:27:57.082 [preceding three messages repeated for every retry from 14:32:48.758214 through 14:32:48.773919: each connect() attempt to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:27:57.082 [2024-07-12 14:32:48.774120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.774131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.774205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.774215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.774428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.774440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.774647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.774659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.774865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.774876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 
00:27:57.082 [2024-07-12 14:32:48.775026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 
00:27:57.082 [2024-07-12 14:32:48.775704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.775968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.775979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.776066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.776078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.082 [2024-07-12 14:32:48.776213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.776224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 
00:27:57.082 [2024-07-12 14:32:48.776396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.082 [2024-07-12 14:32:48.776408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.082 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.776492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.776505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.776583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.776593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.776817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.776830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.776964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.776975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.777175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.777282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.777437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.777606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.777807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.777904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.777915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.778692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.778948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.778960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.779314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.779766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.779935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.779946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.780443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.780899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.780911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.781063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.781174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.781393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.781560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.781777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 
00:27:57.083 [2024-07-12 14:32:48.781870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.083 [2024-07-12 14:32:48.781881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.083 qpair failed and we were unable to recover it. 00:27:57.083 [2024-07-12 14:32:48.782072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.782286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.782397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.782483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 
00:27:57.084 [2024-07-12 14:32:48.782629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.782726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.782909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.782923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 
00:27:57.084 [2024-07-12 14:32:48.783430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.783784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 
00:27:57.084 [2024-07-12 14:32:48.783923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.783935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.784020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.784031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.784160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.784172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.784325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.784337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 00:27:57.084 [2024-07-12 14:32:48.784424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.084 [2024-07-12 14:32:48.784436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.084 qpair failed and we were unable to recover it. 
00:27:57.084 [2024-07-12 14:32:48.784520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.084 [2024-07-12 14:32:48.784531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.084 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f924c000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it." — repeats for every attempt from 14:32:48.784763 through 14:32:48.799513 ...]
00:27:57.087 [2024-07-12 14:32:48.799649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.087 [2024-07-12 14:32:48.799660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.087 qpair failed and we were unable to recover it.
00:27:57.087 [2024-07-12 14:32:48.799814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.799825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.799905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.799917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 
00:27:57.087 [2024-07-12 14:32:48.800528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.800944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.800954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 
00:27:57.087 [2024-07-12 14:32:48.801034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 
00:27:57.087 [2024-07-12 14:32:48.801629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.801910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.801921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.802000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.802014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 
00:27:57.087 [2024-07-12 14:32:48.802190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.802201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.802284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.087 [2024-07-12 14:32:48.802295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.087 qpair failed and we were unable to recover it. 00:27:57.087 [2024-07-12 14:32:48.802370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.802386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.802462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.802472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.802628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.802640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.802777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.802788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.802876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.802887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.803394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.803854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.803865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.804098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.804640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.804963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.804974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.805209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.805356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.805590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.805672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.805750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.805969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.805980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.806062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.806731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.806890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.806900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.807144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.807156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.807229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.807243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 
00:27:57.088 [2024-07-12 14:32:48.807393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.807406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.807498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.807509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.088 [2024-07-12 14:32:48.807597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.088 [2024-07-12 14:32:48.807609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.088 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.807750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.807762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.807861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.807873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 
00:27:57.089 [2024-07-12 14:32:48.807936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.807947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.808083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.808232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.808331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.808476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 
00:27:57.089 [2024-07-12 14:32:48.808694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.808797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.808808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 
00:27:57.089 [2024-07-12 14:32:48.809449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 00:27:57.089 [2024-07-12 14:32:48.809851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.089 [2024-07-12 14:32:48.809861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.089 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.824603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.824614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.824838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.824850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.825410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.825930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.825942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.826037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.826176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.826274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.826366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.826468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.826623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.826835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.826846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.827534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.827908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.827919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 
00:27:57.092 [2024-07-12 14:32:48.828024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.828035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.828170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.828181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.092 [2024-07-12 14:32:48.828315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.092 [2024-07-12 14:32:48.828326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.092 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.828392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.828403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.828475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.828489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.828559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.828571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.828710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.828721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.828935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.828947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.829182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.829862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.829943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.829955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.830512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.830922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.830934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.831014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.831560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.831971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.831983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.832130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.832212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.832292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.832509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.832580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 
00:27:57.093 [2024-07-12 14:32:48.832663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.832816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.832827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.833036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.093 [2024-07-12 14:32:48.833047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.093 qpair failed and we were unable to recover it. 00:27:57.093 [2024-07-12 14:32:48.833211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 00:27:57.094 [2024-07-12 14:32:48.833309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 
00:27:57.094 [2024-07-12 14:32:48.833422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 00:27:57.094 [2024-07-12 14:32:48.833636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 00:27:57.094 [2024-07-12 14:32:48.833734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 00:27:57.094 [2024-07-12 14:32:48.833845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.833856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 00:27:57.094 [2024-07-12 14:32:48.833999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.834010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it. 
00:27:57.094 [2024-07-12 14:32:48.834145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.094 [2024-07-12 14:32:48.834156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.094 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously, roughly 115 more times in this span, with only the timestamps advancing from 14:32:48.834 through 14:32:48.849 ...]
00:27:57.098 [2024-07-12 14:32:48.849189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.849357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.849518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.849621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.849704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.849881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.849893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.849989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.850538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.850939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.850950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.851178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.851272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.851438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.851605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.851688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.851788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.851799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.852005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.852460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.852872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.852971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.852982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 
00:27:57.098 [2024-07-12 14:32:48.853632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.853917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.098 [2024-07-12 14:32:48.853930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.098 qpair failed and we were unable to recover it. 00:27:57.098 [2024-07-12 14:32:48.854110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.854273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.854369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.854535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.854633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.854709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.854863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.854946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.854956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.855577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.855925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.855993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.856079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.856740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.856935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.856946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.857031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.857042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.857139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.857151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [2024-07-12 14:32:48.857301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.857312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 00:27:57.099 [2024-07-12 14:32:48.857421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.857449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.099 [... the same error triple repeats for tqpair=0x1217ed0 through 14:32:48.858325, then resumes for tqpair=0x7f924c000b90 from 14:32:48.858486 through 14:32:48.858825; repeats elided ...]
00:27:57.099 [2024-07-12 14:32:48.858973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.099 [2024-07-12 14:32:48.858985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.099 qpair failed and we were unable to recover it. 
00:27:57.100 [... the same error triple repeats for tqpair=0x7f924c000b90 from 14:32:48.858973 through 14:32:48.861609; repeats elided ...]
00:27:57.100 [2024-07-12 14:32:48.861757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.861768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.861835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.861848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 
00:27:57.100 [2024-07-12 14:32:48.862329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.862803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 
00:27:57.100 [2024-07-12 14:32:48.862956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.862967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 
00:27:57.100 [2024-07-12 14:32:48.863663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.863902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.863913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.864053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.100 [2024-07-12 14:32:48.864064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.100 qpair failed and we were unable to recover it. 00:27:57.100 [2024-07-12 14:32:48.864135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.864320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.864413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.864526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.864618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.864709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.864952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.864963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.865631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.865872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.865883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.866200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.866842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.866931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.866942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.867105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.867207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.867358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.867455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.867676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.867839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.867850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.868058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.868154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.868386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.868544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.868739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.868954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.868966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.869103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.869115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 
00:27:57.101 [2024-07-12 14:32:48.869245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.869256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.869391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.869403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.869559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.869570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.869808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.869819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.101 qpair failed and we were unable to recover it. 00:27:57.101 [2024-07-12 14:32:48.869991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.101 [2024-07-12 14:32:48.870004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 
00:27:57.102 [2024-07-12 14:32:48.870149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.870331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.870419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.870503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.870654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 
00:27:57.102 [2024-07-12 14:32:48.870880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.870891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.870993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 
00:27:57.102 [2024-07-12 14:32:48.871498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.871933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.871944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 00:27:57.102 [2024-07-12 14:32:48.872151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.102 [2024-07-12 14:32:48.872162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.102 qpair failed and we were unable to recover it. 
00:27:57.102 [2024-07-12 14:32:48.872231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.102 [2024-07-12 14:32:48.872242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.102 qpair failed and we were unable to recover it.
00:27:57.105 [2024-07-12 14:32:48.887478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.887561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.887649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.887742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.887829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 
00:27:57.105 [2024-07-12 14:32:48.887976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.887988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 
00:27:57.105 [2024-07-12 14:32:48.888684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.888932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.888943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 
00:27:57.105 [2024-07-12 14:32:48.889285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.889935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.889947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 
00:27:57.105 [2024-07-12 14:32:48.890016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.890027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.890110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.890123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.890266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.890277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.890490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.890502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 00:27:57.105 [2024-07-12 14:32:48.890685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.105 [2024-07-12 14:32:48.890697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.105 qpair failed and we were unable to recover it. 
00:27:57.105 [2024-07-12 14:32:48.890766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.890776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.890861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.890873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.891025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.891197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.891384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.891600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.891755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.891859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.891870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.892346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.892885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.892983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.892995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.893406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.893855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.893867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.894008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.894679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.894913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.894925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.895426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 00:27:57.106 [2024-07-12 14:32:48.895760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.106 qpair failed and we were unable to recover it. 
00:27:57.106 [2024-07-12 14:32:48.895931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.106 [2024-07-12 14:32:48.895942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 
00:27:57.107 [2024-07-12 14:32:48.896445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.896924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.896935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 00:27:57.107 [2024-07-12 14:32:48.897088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.107 [2024-07-12 14:32:48.897099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.107 qpair failed and we were unable to recover it. 
00:27:57.107 [2024-07-12 14:32:48.897183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.107 [2024-07-12 14:32:48.897193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.107 qpair failed and we were unable to recover it.
00:27:57.110 [2024-07-12 14:32:48.911989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 
00:27:57.110 [2024-07-12 14:32:48.912604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.912933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.912943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.913030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 
00:27:57.110 [2024-07-12 14:32:48.913179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.913335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.913494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.913643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.913754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 
00:27:57.110 [2024-07-12 14:32:48.913968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.913980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.914133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.914145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.914300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.914311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.914459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.914473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.914727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.914739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 
00:27:57.110 [2024-07-12 14:32:48.914889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.914901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.914998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.915087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.915243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.915492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 
00:27:57.110 [2024-07-12 14:32:48.915641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.915853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.915967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.915979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.110 [2024-07-12 14:32:48.916059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.110 [2024-07-12 14:32:48.916071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.110 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.916154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.916347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.916445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.916609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.916755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.916851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.916863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.917068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.917636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.917976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.917986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.918222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.918388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.918482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.918656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.918815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.918901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.918911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.919112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.919819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.919987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.919999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.920151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.920165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.920315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.920327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.920476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.920488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.920697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.920708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.920919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.920930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.920999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.921094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 
00:27:57.111 [2024-07-12 14:32:48.921242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.921457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.921632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.921704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.111 [2024-07-12 14:32:48.921715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.111 qpair failed and we were unable to recover it. 00:27:57.111 [2024-07-12 14:32:48.921810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.921821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 
00:27:57.112 [2024-07-12 14:32:48.921972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.921983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 
00:27:57.112 [2024-07-12 14:32:48.922641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.922943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.922955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.923123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.923135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 00:27:57.112 [2024-07-12 14:32:48.923285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.112 [2024-07-12 14:32:48.923296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.112 qpair failed and we were unable to recover it. 
00:27:57.115 [2024-07-12 14:32:48.936809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.936821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.936975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.936986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 
00:27:57.115 [2024-07-12 14:32:48.937499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.937957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.937969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 
00:27:57.115 [2024-07-12 14:32:48.938221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 
00:27:57.115 [2024-07-12 14:32:48.938789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.938968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.938978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.939047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.939058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.939192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.939204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 
00:27:57.115 [2024-07-12 14:32:48.939360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.939371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.939473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.939484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.939630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.115 [2024-07-12 14:32:48.939641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.115 qpair failed and we were unable to recover it. 00:27:57.115 [2024-07-12 14:32:48.939725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.939737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.939868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.939879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.940014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.940666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.940902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.940913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.941257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.941783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.941867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.941879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.942445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.942795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.942964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.942975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.943570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.943891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.943902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.944084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.944095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.944231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.944242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 
00:27:57.116 [2024-07-12 14:32:48.944333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.944351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.944558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.944570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.944705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.116 [2024-07-12 14:32:48.944717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.116 qpair failed and we were unable to recover it. 00:27:57.116 [2024-07-12 14:32:48.944919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.944930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.944998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 
00:27:57.117 [2024-07-12 14:32:48.945147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.945358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.945618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.945714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.945872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.945884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 
00:27:57.117 [2024-07-12 14:32:48.946039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.946125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.946216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.946428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.946521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 
00:27:57.117 [2024-07-12 14:32:48.946742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.946888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.946900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.947057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.947069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.947147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.947159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 00:27:57.117 [2024-07-12 14:32:48.947321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.117 [2024-07-12 14:32:48.947332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.117 qpair failed and we were unable to recover it. 
00:27:57.120 [2024-07-12 14:32:48.959616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.959641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.959715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.959730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.959805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.959908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.959924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.960942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.960953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.961790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.961802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.962856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.962867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.963928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.963940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.120 [2024-07-12 14:32:48.964711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.120 [2024-07-12 14:32:48.964722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.120 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.964807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.964821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.964906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.964918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.964998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.965922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.965934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.966957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.966967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.967761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.967772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.968860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.968871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.969022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.969111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.969191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.969337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.121 [2024-07-12 14:32:48.969494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.121 qpair failed and we were unable to recover it.
00:27:57.121 [2024-07-12 14:32:48.969571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.969583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.969667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.969679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.969750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.969761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.969894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.969907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.970847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.970858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.971950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.971961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.972099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.972110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.972192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.972203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.972361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.972372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.972514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.122 [2024-07-12 14:32:48.972526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.122 qpair failed and we were unable to recover it.
00:27:57.122 [2024-07-12 14:32:48.972696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.972707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.972789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.972799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.972875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.972888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.972969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.972981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.973075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 
00:27:57.122 [2024-07-12 14:32:48.973155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.973312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.973478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.973623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.973796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 
00:27:57.122 [2024-07-12 14:32:48.973946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.973957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.122 qpair failed and we were unable to recover it. 00:27:57.122 [2024-07-12 14:32:48.974032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.122 [2024-07-12 14:32:48.974044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.974552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.974973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.974985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.975073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.975164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.975319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.975486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.975589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.975739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.975890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.975901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.976700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.976987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.976999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.977389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.977954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.977965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.978043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.978190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.978391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.978490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.978589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.978817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.978828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.979028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.979179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.979337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.979565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 
00:27:57.123 [2024-07-12 14:32:48.979654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.123 qpair failed and we were unable to recover it. 00:27:57.123 [2024-07-12 14:32:48.979851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.123 [2024-07-12 14:32:48.979863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.980414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.980858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.980870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.981015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.981119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.981268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.981508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.981664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.981777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.981929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.981940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.982678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.982928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.982940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.983265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.983840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.983850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.984005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.984106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.984268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.984414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 00:27:57.124 [2024-07-12 14:32:48.984505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
00:27:57.124 [2024-07-12 14:32:48.984721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.124 [2024-07-12 14:32:48.984733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.124 qpair failed and we were unable to recover it. 
[... the same three-message error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from [2024-07-12 14:32:48.984878] through [2024-07-12 14:32:48.998968] ...]
00:27:57.127 [2024-07-12 14:32:48.999037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.127 [2024-07-12 14:32:48.999050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-12 14:32:48.999132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.127 [2024-07-12 14:32:48.999144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-12 14:32:48.999238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.127 [2024-07-12 14:32:48.999249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:48.999336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:48.999347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:48.999500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:48.999512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:48.999574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:48.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:48.999715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:48.999726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:48.999941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:48.999953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.000238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.000874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.000982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.000993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.001488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.001955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.001966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.002055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.002667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.002922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.002934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.003323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.003884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.003895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 
00:27:57.128 [2024-07-12 14:32:49.004036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.004047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.004148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.004159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.004297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.128 [2024-07-12 14:32:49.004309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.128 qpair failed and we were unable to recover it. 00:27:57.128 [2024-07-12 14:32:49.004391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.004403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.004546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.004558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.004636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.004647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.004728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.004739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.004951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.004962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.005132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.005226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.005320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.005467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.005627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.005848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.005860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.006011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.006160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.006246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.006401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.006620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.006845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.006939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.006953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.007602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.007967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.007978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.008200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.008812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.008903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.008914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 
00:27:57.129 [2024-07-12 14:32:49.009491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.129 [2024-07-12 14:32:49.009740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.129 qpair failed and we were unable to recover it. 00:27:57.129 [2024-07-12 14:32:49.009835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.009846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.009998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.010093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.010308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.010407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.010487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.010712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.010872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.010883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.011554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.011920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.011989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.012072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.012644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.012871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.012882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.013333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.013891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.013980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.013991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 
00:27:57.130 [2024-07-12 14:32:49.014699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.014870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.014881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.015016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.130 [2024-07-12 14:32:49.015027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.130 qpair failed and we were unable to recover it. 00:27:57.130 [2024-07-12 14:32:49.015111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.015260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.015369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.015508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.015654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.015864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.015889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.016032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.016131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.016215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.016340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.016520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.016686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.016848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.016859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.017399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.017842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.017928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.017939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.018611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.018976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.018987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 
00:27:57.131 [2024-07-12 14:32:49.019192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.019204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.019358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.019370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.019509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.019520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.131 qpair failed and we were unable to recover it. 00:27:57.131 [2024-07-12 14:32:49.019653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.131 [2024-07-12 14:32:49.019665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.019739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.019751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 
00:27:57.132 [2024-07-12 14:32:49.019823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.019833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.019912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.019924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 
00:27:57.132 [2024-07-12 14:32:49.020371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.020813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 
00:27:57.132 [2024-07-12 14:32:49.020916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.020932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.021009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.021025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.021166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.021181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.021353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.021368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 00:27:57.132 [2024-07-12 14:32:49.021452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.132 [2024-07-12 14:32:49.021468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.132 qpair failed and we were unable to recover it. 
00:27:57.132 [2024-07-12 14:32:49.023962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.132 [2024-07-12 14:32:49.023977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.132 qpair failed and we were unable to recover it.
00:27:57.132 [2024-07-12 14:32:49.024073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.132 [2024-07-12 14:32:49.024086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.132 qpair failed and we were unable to recover it.
00:27:57.132 [2024-07-12 14:32:49.024233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.132 [2024-07-12 14:32:49.024245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.132 qpair failed and we were unable to recover it.
00:27:57.132 [2024-07-12 14:32:49.024332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.132 [2024-07-12 14:32:49.024344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.132 qpair failed and we were unable to recover it.
00:27:57.132 [2024-07-12 14:32:49.024431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.132 [2024-07-12 14:32:49.024443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.132 qpair failed and we were unable to recover it.
00:27:57.135 [2024-07-12 14:32:49.036090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.036215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.036416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.036581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.036675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.135 [2024-07-12 14:32:49.036776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.036854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.036865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.135 [2024-07-12 14:32:49.037366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.037863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.037875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.135 [2024-07-12 14:32:49.038010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.038022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.038137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.038149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.038276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.038306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.038517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.038549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.038833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.038864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.135 [2024-07-12 14:32:49.039173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.039203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.039348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.039385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.039578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.039609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.039803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.039833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.040035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.040065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.135 [2024-07-12 14:32:49.040241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.040272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.040549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.040580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.040714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.040744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.040882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.040893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 00:27:57.135 [2024-07-12 14:32:49.041095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.135 [2024-07-12 14:32:49.041107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.135 qpair failed and we were unable to recover it. 
00:27:57.136 [2024-07-12 14:32:49.041310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.041321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.041567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.041579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.041667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.041697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.041829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.041859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.042003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.042034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 
00:27:57.136 [2024-07-12 14:32:49.042148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.042179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.042322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.042352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.042562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.042602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.042848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.042864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.043026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 
00:27:57.136 [2024-07-12 14:32:49.043251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.043407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.043573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.043793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.043951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.043982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 
00:27:57.136 [2024-07-12 14:32:49.044180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.044211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.044390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.044422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.044664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.044679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.044822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.044838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.136 [2024-07-12 14:32:49.044912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.044925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 
00:27:57.136 [2024-07-12 14:32:49.045001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.136 [2024-07-12 14:32:49.045013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.136 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 
00:27:57.416 [2024-07-12 14:32:49.045650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.045919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.045930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 
00:27:57.416 [2024-07-12 14:32:49.046273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.416 qpair failed and we were unable to recover it. 00:27:57.416 [2024-07-12 14:32:49.046754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.416 [2024-07-12 14:32:49.046766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 
00:27:57.417 [2024-07-12 14:32:49.046843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.046854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.046925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.046936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.047078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.047269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.047367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 
00:27:57.417 [2024-07-12 14:32:49.047501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.047677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.047833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.047849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.048013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.048028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 00:27:57.417 [2024-07-12 14:32:49.048115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.417 [2024-07-12 14:32:49.048130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.417 qpair failed and we were unable to recover it. 
00:27:57.417 [2024-07-12 14:32:49.048339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.417 [2024-07-12 14:32:49.048355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.417 qpair failed and we were unable to recover it.
00:27:57.417 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat, first for tqpair=0x1217ed0 and then for tqpair=0x7f924c000b90, all targeting addr=10.0.0.2, port=4420, from 14:32:49.048466 through 14:32:49.069971 ...]
00:27:57.418 [2024-07-12 14:32:49.070091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.070179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.070271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.070426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.070596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 
00:27:57.418 [2024-07-12 14:32:49.070894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.070943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.071083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.071113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.071229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.071260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.071392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.071423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.071614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.071645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 
00:27:57.418 [2024-07-12 14:32:49.071842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.071873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.072109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.072121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.072268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.072298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.072445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.072478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.072695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.072726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 
00:27:57.418 [2024-07-12 14:32:49.072896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.072907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.073074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.073233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.073455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.073672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 
00:27:57.418 [2024-07-12 14:32:49.073820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.073960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.073972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.074124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.074149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.074353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.074392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.074575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.074607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 
00:27:57.418 [2024-07-12 14:32:49.074798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.074810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.075015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.075046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.075297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.075327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.075602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.418 [2024-07-12 14:32:49.075633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.418 qpair failed and we were unable to recover it. 00:27:57.418 [2024-07-12 14:32:49.075830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.075841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.076063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.076093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.076348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.076388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.076500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.076532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.076631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.076662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.076843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.076873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.077130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.077161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.077405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.077437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.077624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.077655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.077835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.077867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.078005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.078040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.078229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.078260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.078438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.078470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.078681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.078692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.078898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.078928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.079054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.079085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.079268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.079298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.079471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.079502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.079700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.079731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.079846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.079877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.080077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.080107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.080286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.080317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.080452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.080483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.080701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.080732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.080926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.080957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.081202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.081233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.081358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.081400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.081646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.081677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.081879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.081890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.081973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.082002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.082140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.082170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.082352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.082401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.082647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.082678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.082845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.082857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.083015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.083167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.083298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.083589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.083831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.083940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.083954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.084111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.084127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.084312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.084343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.084541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.084573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.084819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.084851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.085040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.085071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.085355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.085399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.085579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.085610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.085813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.085843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.086087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.086118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.086399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.086431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.086563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.086603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.086765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.086780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.086958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.086989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.087124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.087155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.087284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.087315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.087496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.087531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.087709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.087740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.087996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.088150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.088238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.088345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.088566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.088730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.088962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.088993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.089270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.089301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.089431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.089464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 
00:27:57.419 [2024-07-12 14:32:49.089578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.089610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.419 [2024-07-12 14:32:49.089807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.419 [2024-07-12 14:32:49.089851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.419 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.090082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.090099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.090260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.090275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.090395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.090426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.090620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.090652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.090849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.090880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.091127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.091158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.091422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.091456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.091720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.091751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.091919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.091935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.092154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.092186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.092375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.092414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.092609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.092641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.092905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.092921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.093145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.093161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.093254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.093268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.093489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.093520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.093634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.093665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.093853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.093883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.094031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.094048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.094126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.094141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.094366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.094410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.094682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.094712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.094918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.094958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.095095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.095125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.095317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.095347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.095498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.095530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.095662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.095702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.095798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.095809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.095978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.096009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.096196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.096226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.096409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.096442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.096577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.096607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.096787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.096818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.097001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.097032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.097154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.097185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.097385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.097418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.097599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.097631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.097877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.097908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.098026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.098057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.098301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.098332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.098520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.098552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.098676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.098707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.098904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.098935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.099193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.099224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.099347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.099387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.099570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.099601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.099794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.099825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.099962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.099993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.100173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.100204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.100395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.100428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.100673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.100703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.100826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.100856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.100961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.100971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.101108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.101175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.101273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.101351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.101488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.101695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.101967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.101997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.102203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.102234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.102441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.102473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.102606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.102636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.102773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.102785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.102938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.102950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.103032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.103166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.103383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 
00:27:57.420 [2024-07-12 14:32:49.103593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.103746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.103953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.103964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.104053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.104093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.420 qpair failed and we were unable to recover it. 00:27:57.420 [2024-07-12 14:32:49.104218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.420 [2024-07-12 14:32:49.104249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.104356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.104396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.104506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.104537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.104711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.104743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.104870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.104901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.105101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.105112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.105345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.105356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.105507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.105519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.105676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.105707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.105832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.105863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.105990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.106020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.106215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.106246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.106498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.106530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.106678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.106709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.106888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.106918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.107103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.107134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.107266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.107297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.107457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.107493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.107686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.107716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.107970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.108001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.108198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.108229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.108423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.108454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.108570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.108602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.108729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.108760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.109028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.109059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.109305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.109337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.109559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.109591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.109827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.109838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.109996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.110027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.110203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.110233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.110421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.110453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.110651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.110682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.110993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.111023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.111281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.111312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.111492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.111523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.111796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.111827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.112016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.112634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.112900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.112911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.113010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.113091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.113306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.113474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.113621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.113832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.113863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.114047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.114174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.114343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.114500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.114744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.114953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.114984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.115160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.115172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.115321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.115334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.115649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.115681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.115796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.115827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.116024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.116208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.116291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.116412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.116552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 00:27:57.421 [2024-07-12 14:32:49.116714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.421 [2024-07-12 14:32:49.116730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.421 qpair failed and we were unable to recover it. 
00:27:57.421 [2024-07-12 14:32:49.116807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.116822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.116964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.116980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.117424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.117884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.117894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.118037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.118249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.118355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.118450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.118618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.118770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.118800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.119001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.119031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.119284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.119315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.119513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.119545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.119673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.119704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.119820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.119850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.119978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.120010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.120206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.120237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.120414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.120446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.120688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.120719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.120899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.120931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.121193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.121224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.121472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.121504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.121609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.121640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.121762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.121792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.121978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.122009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.122192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.122229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.122489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.122521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.122786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.122827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.122922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.122933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.123018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.123028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.123172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.123203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.123395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.123426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.123642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.123673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.123862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.123874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.124054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.124084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.124197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.124228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.124426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.124458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.124656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.124687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.124826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.124857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.125049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.125080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.125256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.125288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.125411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.125442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.125686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.125717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.125846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.125877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.126120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.126150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.126355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.126393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.126581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.126612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.126788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.126818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.127054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.127084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.127290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.127321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.127508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.127539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.127662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.127694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.127939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.127970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.128098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.128400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.128490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.128582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.128728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.128892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.128904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.129051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.129083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.129213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.129243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.129491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.129522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.129699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.129730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.129924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.129955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.130171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.130201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.130325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.130361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.130612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.130642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.130908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.130920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 00:27:57.422 [2024-07-12 14:32:49.131066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.422 [2024-07-12 14:32:49.131078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.422 qpair failed and we were unable to recover it. 
00:27:57.422 [2024-07-12 14:32:49.131215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.131246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.131385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.131416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.131544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.131574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.131756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.131787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.132025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.132037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.132192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.132223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.132417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.132449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.132652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.132682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.132947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.132977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.133155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.133186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.133329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.133360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.133624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.133656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.133765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.133795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.133936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.133966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.134145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.134176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.134354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.134392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.134535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.134566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.134757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.134787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.134968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.135009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.135151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.135163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.135322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.135352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.135542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.135573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.135767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.135799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.136018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.136030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.136167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.136179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.136323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.136366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.136573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.136604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.136889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.136920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.137144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.137174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.137423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.137455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.137566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.137596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.137715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.137746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.137922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.137953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.138173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.138205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.138452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.138483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.138615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.138646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.138770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.138807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.138998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.139029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.139264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.139452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.139483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.139612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.139643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.139784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.139814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.140013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.140025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.140186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.140217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.140414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.140447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.140631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.140663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.140787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.140818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.141098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.141129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.141305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.141336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.141527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.141559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.423 [2024-07-12 14:32:49.141775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.141806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.142027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.142057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.142178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.142210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.142460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.142472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 00:27:57.423 [2024-07-12 14:32:49.142615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.423 [2024-07-12 14:32:49.142627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.423 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.158954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.425 [2024-07-12 14:32:49.158988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.425 qpair failed and we were unable to recover it.
00:27:57.425 [2024-07-12 14:32:49.159183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.425 [2024-07-12 14:32:49.159251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.425 qpair failed and we were unable to recover it.
00:27:57.425 [2024-07-12 14:32:49.163904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.163934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.164199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.164231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.164470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.164504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.164748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.164779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.164983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.164999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.165155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.165171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.165395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.165427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.165693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.165725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.165907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.165938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.166125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.166156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.166299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.166337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.166574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.166590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.166823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.166838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.167075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.167090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.167325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.167341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.167554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.167570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.167745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.167760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.167975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.167991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.168185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.168216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.168416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.168449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.168652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.168682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.168924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.168955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.169203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.169234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.169482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.169499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.169719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.169735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.169910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.169926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.170076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.170107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.170367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.170407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.170584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.170615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.170860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.170891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.171134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.171150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.171331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.171348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.171490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.171505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.171655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.171686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.171865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.171897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.172089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.172119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.172310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.172326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.172517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.172548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.172828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.172860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.173003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.173043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.173276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.173291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.173523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.173653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.173668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.173828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.173860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.174040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.174071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 
00:27:57.425 [2024-07-12 14:32:49.174288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.174319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.174608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.174641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.174914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.174946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.175161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.425 [2024-07-12 14:32:49.175192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.425 qpair failed and we were unable to recover it. 00:27:57.425 [2024-07-12 14:32:49.175370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.175408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.175626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.175657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.175800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.175831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.176096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.176126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.176423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.176454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.176700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.176731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.176855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.176885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.177168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.177199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.177395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.177427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.177788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.177855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.178014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.178049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.178328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.178343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.178576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.178592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.178821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.178837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.179092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.179107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.179213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.179229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.179463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.179495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.179765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.179796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.179994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.180025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.180171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.180201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.180399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.180430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.180678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.180710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.180983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.181013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.181171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.181202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.181446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.181480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 00:27:57.426 [2024-07-12 14:32:49.181605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.426 [2024-07-12 14:32:49.181634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.426 qpair failed and we were unable to recover it. 
00:27:57.426 [2024-07-12 14:32:49.181889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.181920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.182136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.182167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.182424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.182439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.182535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.182549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.182769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.182784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.182944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.182960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.183968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.183982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.184153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.184169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.184315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.184330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.184508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.184524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.184669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.184684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.184921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.184937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.185162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.185178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.185346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.185361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.185523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.185540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.185694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.185710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.185945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.185960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.186172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.186188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.186431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.186448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.186711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.186728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.187104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.187123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.187294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.187312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.187535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.187551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.187761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.187776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.188946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.188961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.189158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.189174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.189404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.189424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.189526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.189541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.189653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.189667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.426 [2024-07-12 14:32:49.189779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.426 [2024-07-12 14:32:49.189794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.426 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.190025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.190041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.190231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.190247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.190482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.190498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.190658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.190673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.190906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.190923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.191119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.191134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.191368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.191387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.191575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.191592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.191770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.191786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.192947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.192961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.193115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.193371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.193390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.193638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.193654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.193880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.193896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.194059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.194074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.194246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.194262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.194511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.194528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.194711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.194727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.194966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.194986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.195150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.195165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.195330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.195345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.195565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.195581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.195742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.195759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.195853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.195868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.196018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.196034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.196272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.196287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.196538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.196555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.196782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.196799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.196960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.196976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.197162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.197177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.197331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.197347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.197514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.197530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.197693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.197709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.197917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.197932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.198168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.198183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.198458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.198474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.198635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.198651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.198906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.198922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.199934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.199950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.200165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.200180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.200351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.200366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.200528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.200545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.200699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.200715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.200864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.200879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.201092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.201108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.201314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.201329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.201497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.201513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.201724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.201739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.201884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.201899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.202080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.202096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.202306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.202321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.202490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.202506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.202739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.202754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.202906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.202922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.203177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.203211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.203394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.203428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.203695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.203712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.203926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.203942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.204152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.204168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.204382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.204399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.427 qpair failed and we were unable to recover it.
00:27:57.427 [2024-07-12 14:32:49.204622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.427 [2024-07-12 14:32:49.204637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.428 qpair failed and we were unable to recover it.
00:27:57.428 [2024-07-12 14:32:49.204815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.428 [2024-07-12 14:32:49.204831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.428 qpair failed and we were unable to recover it.
00:27:57.428 [2024-07-12 14:32:49.204984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.428 [2024-07-12 14:32:49.205000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.428 qpair failed and we were unable to recover it.
00:27:57.428 [2024-07-12 14:32:49.205156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.428 [2024-07-12 14:32:49.205172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.428 qpair failed and we were unable to recover it.
00:27:57.428 [2024-07-12 14:32:49.205257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.205271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.205424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.205440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.205672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.205688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.205925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.205945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.206089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.206105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.206288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.206304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.206464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.206480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.206699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.206715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.206962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.206978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.207122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.207138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.207303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.207318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.207527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.207543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.207749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.207764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.207937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.207952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.208160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.208176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.208321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.208337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.208451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.208465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.208610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.208626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.208834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.208849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.209030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.209046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.209274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.209289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.209452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.209468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.209631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.209647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.209906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.209922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.210158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.210173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.210359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.210375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.210614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.210630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.210837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.210853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.211026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.211041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.211236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.211252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.211518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.211542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.211722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.211750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.212012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.212176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.212282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.212445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.212663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.212838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.212851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.213007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.213019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.213241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.213253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.213642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.213654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.213792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.213805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.214026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.214219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.214320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.214551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.214700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.214921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.214933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.215094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.215106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.215252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.215264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.215421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.215433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.215679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.215691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.215904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.215916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 00:27:57.428 [2024-07-12 14:32:49.216070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.428 [2024-07-12 14:32:49.216082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.428 qpair failed and we were unable to recover it. 
00:27:57.428 [2024-07-12 14:32:49.216177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.216188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.216284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.216295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.216476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.216488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.216711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.216723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.216815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.216826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 
00:27:57.429 [2024-07-12 14:32:49.217025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.217284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.217519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.217687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.217804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 
00:27:57.429 [2024-07-12 14:32:49.217967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.217978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.218122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.218336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.218448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.218609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 
00:27:57.429 [2024-07-12 14:32:49.218758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.218908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.218920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.219074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.219085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.219180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.219191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 00:27:57.429 [2024-07-12 14:32:49.219280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.429 [2024-07-12 14:32:49.219291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.429 qpair failed and we were unable to recover it. 
00:27:57.429 [2024-07-12 14:32:49.219533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.429 [2024-07-12 14:32:49.219545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.429 qpair failed and we were unable to recover it.
00:27:57.431 [2024-07-12 14:32:49.239761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.239774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.239920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.239931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.240083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.240247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.240428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.240642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.240806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.240891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.240902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.241103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.241115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.241258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.241270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.241498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.241510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.241674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.241686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.241822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.241835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.242347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.242859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.242869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.243005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.243665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.243972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.243984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.244399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.244953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.244964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.245099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.245754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.245974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.245987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.246371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.246958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.246969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.247104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.247232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.247317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.247462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.247566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 
00:27:57.431 [2024-07-12 14:32:49.247715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.247932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.247944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.248013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.431 [2024-07-12 14:32:49.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.431 qpair failed and we were unable to recover it. 00:27:57.431 [2024-07-12 14:32:49.248156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.248167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.248315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.248327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 
00:27:57.432 [2024-07-12 14:32:49.248530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.248542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.248676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.248687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.248815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.248827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.248997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.249146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 
00:27:57.432 [2024-07-12 14:32:49.249236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.249407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.249571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.249729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.249903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.249915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 
00:27:57.432 [2024-07-12 14:32:49.250131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.250143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.250232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.250242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.250392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.250405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.250536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.250548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 00:27:57.432 [2024-07-12 14:32:49.250693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.432 [2024-07-12 14:32:49.250704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.432 qpair failed and we were unable to recover it. 
00:27:57.432 [2024-07-12 14:32:49.250772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.432 [2024-07-12 14:32:49.250783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.432 qpair failed and we were unable to recover it.
[... the same three-line group (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:32:49.250937 through 14:32:49.265393 ...]
00:27:57.434 [2024-07-12 14:32:49.265485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.265496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.265592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.265603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.265741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.265752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.265842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.265854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.266055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.266067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.266215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.266227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.266395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.266407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.266623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.266634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.266870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.266882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.267026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.267134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.267346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.267502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.267724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.267904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.267915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.268050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.268237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.268330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.268475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.268622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.268842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.268923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.268933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.269472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.269927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.269939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.270180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.270710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.270894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.270914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.271144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.271160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.271342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.271357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.271540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.271556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.271697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.271712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.271856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.271871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.272027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.272234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.272355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.272545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.272716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.272899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.272914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.273101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.273116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 
00:27:57.434 [2024-07-12 14:32:49.273371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.273396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.434 qpair failed and we were unable to recover it. 00:27:57.434 [2024-07-12 14:32:49.273622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.434 [2024-07-12 14:32:49.273637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.273868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.273883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.274045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.274060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.274240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.274256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.274502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.274520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.274692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.274707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.274943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.274959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.275192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.275208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.275354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.275369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.275466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.275482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.275731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.275747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.275899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.275914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.276075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.276090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.276245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.276260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.276519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.276534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.276774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.276789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.276969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.276985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.277149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.277165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.277304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.277319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.277342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1226000 (9): Bad file descriptor 00:27:57.435 [2024-07-12 14:32:49.277670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.277695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.277918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.277934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.278159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.278174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.278390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.278405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.278631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.278646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.278872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.278888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.279043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.279191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.279344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.279583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.279821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.279934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.279949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.280108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.280123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.280271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.280286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 00:27:57.435 [2024-07-12 14:32:49.280431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.435 [2024-07-12 14:32:49.280447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.435 qpair failed and we were unable to recover it. 
00:27:57.435 [2024-07-12 14:32:49.280614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.280629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.280900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.280916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.281088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.281103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.281214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.281228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.281466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.281482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.281699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.281721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.281865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.281880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.282888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.282900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.283101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.283112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.283253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.283264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.283477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.283488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.283688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.283700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.283903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.283915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.284098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.284109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.284346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.284358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.284527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.284539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.284739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.284750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.285967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.285978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.286182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.286194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.286340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.286352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.286523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.286535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.286672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.286685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.286869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.286889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.287932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.287943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.288078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.288089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.288194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.288205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.435 [2024-07-12 14:32:49.288383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.435 [2024-07-12 14:32:49.288395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.435 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.288633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.288645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.288891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.288903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.289118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.289130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.289300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.289313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.289532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.289544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.289814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.289826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.289978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.289990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.290203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.290214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.290359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.290372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.290577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.290589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.290814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.290825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.290931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.290944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.291087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.291099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.291167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.291177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.291426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.291439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.291610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.291622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.291816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.291827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.292986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.292996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.293221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.293233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.293433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.293446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.293581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.293592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.293695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.293705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.293937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.293950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.294122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.294134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.294290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.294302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.294527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.294538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.294754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.294766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.294928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.294940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.295854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.295866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.296974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.296986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.297189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.297201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.297357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.297369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.297630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.297641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.297845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.297857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.298013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.298025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.298182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.436 [2024-07-12 14:32:49.298194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.436 qpair failed and we were unable to recover it.
00:27:57.436 [2024-07-12 14:32:49.298289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.298299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.298449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.298461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.298555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.298565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.298661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.298672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.298812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.298824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 
00:27:57.436 [2024-07-12 14:32:49.299026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.299169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.299339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.299446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.299661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 
00:27:57.436 [2024-07-12 14:32:49.299846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.299857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.300037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.300049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.300283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.300295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.300448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.300461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.436 [2024-07-12 14:32:49.300599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.300611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 
00:27:57.436 [2024-07-12 14:32:49.300714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.436 [2024-07-12 14:32:49.300725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.436 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.300837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.300849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.300949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.300959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.301189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.301284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.301439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.301559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.301722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.301955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.302119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.302131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.302215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.302225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.302402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.302414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.302567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.302578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.302855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.302867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.303058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.303069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.303319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.303470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.303483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.303707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.303724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.303949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.303961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.304228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.304240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.304443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.304454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.304605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.304617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.304783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.304795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.304939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.304951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.305222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.305234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.305415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.305426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.305598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.305610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.305784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.305796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.305950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.305962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.306123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.306134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.306358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.306370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.306546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.306558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.306732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.306744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.306961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.306973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.307203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.307214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.307417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.307429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.307567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.307579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.307763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.307930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.307942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.308191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.308203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.308408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.308420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.308580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.308592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.308769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.308780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.308878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.308889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.309093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.309250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.309438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.309542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.309754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.309865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.309876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.310127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.310138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.310317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.310328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.310505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.310517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.310751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.310763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.310962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.310974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.311129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.311236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.311401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.311655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.311812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.311910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.311920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.312097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.312109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.312308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.312325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.312508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.312520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.312742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.312754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.312932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.312944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.313161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.313173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.313400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.313412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.313555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.313567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 00:27:57.437 [2024-07-12 14:32:49.313791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.437 [2024-07-12 14:32:49.313803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.437 qpair failed and we were unable to recover it. 
00:27:57.437 [2024-07-12 14:32:49.313941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.437 [2024-07-12 14:32:49.313952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.437 qpair failed and we were unable to recover it.
00:27:57.439 [2024-07-12 14:32:49.337962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.337993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.338193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.338224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.338488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.338518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.338773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.338804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.339008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.339039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.339253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.339283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.339547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.339578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.339842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.339854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.339956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.339966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.340111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.340138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.340262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.340291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.340493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.340524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.340643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.340673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.340861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.340874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.341038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.341069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.341266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.341297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.341594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.341626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.341904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.341940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.342212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.342242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.342447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.342479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.342728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.342766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.342902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.342915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.343099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.343129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.343375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.343415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.343680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.343710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.343917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.343947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.344216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.344253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.344460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.344472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.344672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.344685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.344911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.344923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.345124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.345136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.345362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.345374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.345587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.345618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.345838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.345868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.346004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.346035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.439 [2024-07-12 14:32:49.346300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.346339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.346583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.346596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.346827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.346838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.347004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.347016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 00:27:57.439 [2024-07-12 14:32:49.347183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.439 [2024-07-12 14:32:49.347194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.439 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.347358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.347397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.347606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.347636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.347840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.347870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.348051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.348082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.348356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.348408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.348612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.348624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.348851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.348863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.349017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.349047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.349312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.349352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.349436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.349446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.349633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.349645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.349875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.349905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.350177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.350207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.350505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.350537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.350762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.350793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.351065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.351096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.351273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.351305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.351495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.351509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.351732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.351764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.351969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.351999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.352239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.352271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.352493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.352545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.352812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.352823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.353041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.353053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.353257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.353269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.353421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.353433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.353714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.353745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.353988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.354019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.354203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.354234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.354426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.354438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.354667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.354698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.354899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.354931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.355189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.355219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.355497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.355534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 00:27:57.440 [2024-07-12 14:32:49.355790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.355820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it. 
00:27:57.440 [2024-07-12 14:32:49.356009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.440 [2024-07-12 14:32:49.356041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.440 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats for every subsequent retry, always with errno = 111 (ECONNREFUSED), tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420 ...]
00:27:57.442 [2024-07-12 14:32:49.384590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.384602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it.
00:27:57.442 [2024-07-12 14:32:49.384817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.384848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.384994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.385026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.385292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.385323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.385584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.385596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.385750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.385780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 
00:27:57.442 [2024-07-12 14:32:49.385980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.386188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.386476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.386700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.386809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 
00:27:57.442 [2024-07-12 14:32:49.386974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.386986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.387176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.387206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.387327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.387359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.387616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.387648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.387954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.387985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 
00:27:57.442 [2024-07-12 14:32:49.388254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.388285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.388529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.388561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.388736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.388748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.388926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.388957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.389204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.389234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 
00:27:57.442 [2024-07-12 14:32:49.389500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.389532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.389828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.389858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.390038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.390069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.390319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.390350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.390581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.390594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 
00:27:57.442 [2024-07-12 14:32:49.390737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.390749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.390976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.391008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.391214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.391250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.442 qpair failed and we were unable to recover it. 00:27:57.442 [2024-07-12 14:32:49.391458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.442 [2024-07-12 14:32:49.391490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.391679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.391691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.391925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.391956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.392221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.392252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.392543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.392555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.392641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.392652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.392924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.392954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.393260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.393291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.393560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.393592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.393816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.393855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.393941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.393951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.394155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.394187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.394392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.394424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.394564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.394596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.394868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.394899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.395192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.395223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.395441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.395473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.395683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.395714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.395890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.395921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.396118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.396149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.396423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.396456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.396743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.396773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.397050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.397079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.397386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.397418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.397689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.397720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.397918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.397950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.398164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.398194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.398464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.398496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.398754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.398767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.398995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.399008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.399238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.399250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.399469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.399482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.399581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.399592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.399823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.399854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.400059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.400090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.400216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.400248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.400440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.400453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 
00:27:57.443 [2024-07-12 14:32:49.400589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.400600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.400815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.400827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.401024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.443 [2024-07-12 14:32:49.401039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.443 qpair failed and we were unable to recover it. 00:27:57.443 [2024-07-12 14:32:49.401195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.401207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 00:27:57.444 [2024-07-12 14:32:49.401442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.401454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 
00:27:57.444 [2024-07-12 14:32:49.401610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.401622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 00:27:57.444 [2024-07-12 14:32:49.401827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.401838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 00:27:57.444 [2024-07-12 14:32:49.401914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.401924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 00:27:57.444 [2024-07-12 14:32:49.402200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.402231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 00:27:57.444 [2024-07-12 14:32:49.402432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.444 [2024-07-12 14:32:49.402464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.444 qpair failed and we were unable to recover it. 
00:27:57.444 [2024-07-12 14:32:49.402713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.402725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.402950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.402962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.403138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.403149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.403252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.403283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.403553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.403585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.403763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.403794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.403988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.404019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.404264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.404295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.404564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.404597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.404781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.404812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.405892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.405923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.406182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.406213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.406483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.406515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.406807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.406838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.407076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.407152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.444 [2024-07-12 14:32:49.407499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.444 [2024-07-12 14:32:49.407568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.444 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.407840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.407858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.408040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.408056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.408212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.408228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.408485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.408502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.408603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.408618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.408826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.408841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.409013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.409030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.409265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.409280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.409458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.409475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.409622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.409638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.409878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.409894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.410125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.410146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.410252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.410266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.410437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.410453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.410664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.410680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.410854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.410870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.411017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.411033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.411179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.411195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.411341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.411358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.411575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.411592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.411803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.411819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.728 qpair failed and we were unable to recover it.
00:27:57.728 [2024-07-12 14:32:49.412030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.728 [2024-07-12 14:32:49.412045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.412289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.412304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.412462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.412479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.412641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.412657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.412839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.412855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.413067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.413083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.413266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.413282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.413485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.413501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.413583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.413599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.413767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.413799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.414068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.414100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.414394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.414426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.414675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.414707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.414976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.415256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.415287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.415420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.415452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.415697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.415728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.415958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.416000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.416281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.416313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.416585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.416621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.416922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.416955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.417217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.417248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.417465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.417481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.417721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.417753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.418016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.418047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.418301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.418333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.418612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.418645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.418923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.418955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.419242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.419273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.419479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.419510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.419754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.419799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.419950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.419965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.420224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.420255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.420528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.420560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.729 qpair failed and we were unable to recover it.
00:27:57.729 [2024-07-12 14:32:49.420850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.729 [2024-07-12 14:32:49.420881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.421152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.421183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.421395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.421427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.421610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.421642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.421822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.421853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.422132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.422163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.422355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.422393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.422652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.422684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.422968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.422999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.423190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.423221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.423494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.423526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.423765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.423796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.423983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.424014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.424261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.424292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.424496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.424529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.424674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.424706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.424916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.424947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.425213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.425244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.425440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.425473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.425655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.425666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.425851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.425883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.426150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.426181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.426453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.426485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.426691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.426723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.426995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.427026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.427297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.427329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.427554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.427586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.427733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.427763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.428017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.428029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.428185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.428224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.428488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.428521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.428815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.428846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.429038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.429069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.429219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.429249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.429448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.429481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.730 [2024-07-12 14:32:49.429751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.730 [2024-07-12 14:32:49.429781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.730 qpair failed and we were unable to recover it.
00:27:57.731 [2024-07-12 14:32:49.429910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.731 [2024-07-12 14:32:49.429924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.731 qpair failed and we were unable to recover it.
00:27:57.731 [2024-07-12 14:32:49.430091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.731 [2024-07-12 14:32:49.430135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.731 qpair failed and we were unable to recover it.
00:27:57.731 [2024-07-12 14:32:49.430327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.731 [2024-07-12 14:32:49.430358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.731 qpair failed and we were unable to recover it.
00:27:57.731 [2024-07-12 14:32:49.430611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.430623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.430849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.430862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.430988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.430999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.431162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.431193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.431395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.431427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.431637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.431668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.431926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.431938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.432091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.432103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.432329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.432342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.432504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.432517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.432723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.432735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.432904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.432915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.433066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.433078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.433216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.433227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.433391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.433404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.433642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.433655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.433858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.433870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.434096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.434264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.434276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.434493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.434505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.434659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.434672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.434821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.434832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.435059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.435234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.435485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.435649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.435740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.435889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.435899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.436073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.436085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.436247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.436259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.436461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.436474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.436677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.436689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.436875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.436887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.437121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.437133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.437361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.437373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 
00:27:57.731 [2024-07-12 14:32:49.437616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.437628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.437865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.437877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.438079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.731 [2024-07-12 14:32:49.438091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.731 qpair failed and we were unable to recover it. 00:27:57.731 [2024-07-12 14:32:49.438321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.438333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.438438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.438448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.438594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.438606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.438806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.438818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.438952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.438965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.439170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.439183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.439393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.439405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.439631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.439643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.439869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.439881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.440039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.440052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.440204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.440216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.440361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.440373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.440579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.440591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.440796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.440808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.441012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.441024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.441256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.441268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.441480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.441493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.441695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.441707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.441910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.441922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.442087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.442099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.442347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.442359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.442616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.442628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.442882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.442894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.443106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.443118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.443362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.443374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.443551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.443564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.443713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.443727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.443929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.443941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.444086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.444098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.444331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.444343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.444506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.444517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.444713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.444726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 
00:27:57.732 [2024-07-12 14:32:49.444951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.444963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.445253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.445265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.445471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.732 [2024-07-12 14:32:49.445484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.732 qpair failed and we were unable to recover it. 00:27:57.732 [2024-07-12 14:32:49.445710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.445722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.445815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.445825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.446032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.446044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.446229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.446241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.446457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.446470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.446612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.446625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.446829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.446841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.446996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.447008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.447221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.447232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.447461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.447474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.447574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.447585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.447808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.447819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.448028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.448040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.448262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.448274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.448481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.448493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.448784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.448796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.449018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.449030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.449258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.449270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.449441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.449453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.449656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.449668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.449894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.449906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.450073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.450174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.450339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.450581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.450726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.450985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.450998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.451214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.451226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.451474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.451486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.451648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.451660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.451836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.451849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.452032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.452046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 
00:27:57.733 [2024-07-12 14:32:49.452137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.452148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.452301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.452313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.452541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.733 [2024-07-12 14:32:49.452553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.733 qpair failed and we were unable to recover it. 00:27:57.733 [2024-07-12 14:32:49.452771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.452783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.452936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.452948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.453107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.453321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.453521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.453650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.453804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.453985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.453997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.454228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.454240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.454387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.454400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.454575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.454587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.454816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.454829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.454979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.454991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.455085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.455095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.455320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.455332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.455469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.455481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.455738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.455750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.455923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.455935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.456157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.456284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.456450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.456603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.456770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.456959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.456971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.457122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.457135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.457296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.457308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.457461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.457473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.457711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.457723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.457900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.457912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.458150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.458162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.458311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.458323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.458473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.458485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.458717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.458729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.458863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.458875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.734 [2024-07-12 14:32:49.459571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.459986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.459998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 00:27:57.734 [2024-07-12 14:32:49.460139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.734 [2024-07-12 14:32:49.460152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.734 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.460289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.460300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.460440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.460450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.460600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.460612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.460876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.460888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.461054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.461351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.461569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.461662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.461813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.461969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.461981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.462131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.462371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.462476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.462553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.462715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.462955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.462967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.463124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.463314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.463424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.463586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.463766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.463913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.463925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.464150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.464355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.464517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.464737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.464836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.464982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.464994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.465268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.465280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.465428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.465440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.465665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.465677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.465879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.465890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.466090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.466260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.466520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.466740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.466849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.466938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.466949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.467099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.467111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.467339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.467351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 
00:27:57.735 [2024-07-12 14:32:49.467566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.467578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.467843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.735 [2024-07-12 14:32:49.467854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.735 qpair failed and we were unable to recover it. 00:27:57.735 [2024-07-12 14:32:49.468029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.468041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.468240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.468251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.468477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.468489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.468627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.468639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.468798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.468811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.469016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.469028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.469233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.469244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.469408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.469420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.469669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.469680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.469838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.469850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.470050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.470211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.470358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.470594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.470807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.470906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.470917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.471142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.471294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.471457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.471570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.471807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.471984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.471995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.472228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.472405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.472417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.472644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.472656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.472726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.472737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.472881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.472893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.473098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.473111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.473327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.473339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.473570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.473582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.473673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.473855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.473867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.474085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.474246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.474404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.474498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.474592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.474698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.474853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.474865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 
00:27:57.736 [2024-07-12 14:32:49.475604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.736 [2024-07-12 14:32:49.475980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.736 [2024-07-12 14:32:49.475992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.736 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.476093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.476176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.476408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.476648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.476804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.476977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.476988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.477134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.477146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.477353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.477365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.477529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.477541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.477694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.477706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.477844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.478008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.478162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.478310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.478504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.478670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.478931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.478943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.479160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.479171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.479416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.479428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.479628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.479641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.479848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.479860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.480049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.480061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.480269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.480281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.480412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.480424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.480662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.480674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.480879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.480891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 00:27:57.737 [2024-07-12 14:32:49.481061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.737 [2024-07-12 14:32:49.481075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.737 qpair failed and we were unable to recover it. 
00:27:57.737 [2024-07-12 14:32:49.481246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.737 [2024-07-12 14:32:49.481258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.737 qpair failed and we were unable to recover it.
00:27:57.737 [2024-07-12 14:32:49.481415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.737 [2024-07-12 14:32:49.481428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.737 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every subsequent connection attempt from 14:32:49.481585 through 14:32:49.502256: connect() to addr=10.0.0.2, port=4420 fails with errno = 111 each time, and tqpair=0x7f924c000b90 (qpair) cannot be recovered ...]
00:27:57.740 [2024-07-12 14:32:49.502403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.502415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.502492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.502504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.502578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.502588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.502740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.502752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.502842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.502855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.503065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.503076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.503310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.503322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.503549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.503561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.503831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.503843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.504067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.504079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.504217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.504228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.504388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.504400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.504539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.504551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.504804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.504816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.505021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.505122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.505352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.505545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.505654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.505876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.505888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.506023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.506036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.506287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.506299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.506462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.506474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.506625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.506637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.506859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.506871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.507019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.507031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.507262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.507274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.507496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.507508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.507658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.507670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.507770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.507787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 
00:27:57.740 [2024-07-12 14:32:49.508016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.740 [2024-07-12 14:32:49.508027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.740 qpair failed and we were unable to recover it. 00:27:57.740 [2024-07-12 14:32:49.508176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.508211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.508499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.508533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.508809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.508826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.508924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.508939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.509101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.509116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.509325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.509342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.509575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.509591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.509764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.509780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.509920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.509935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.510091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.510106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.510339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.510355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.510507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.510522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.510665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.510680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.510836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.511025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.511041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.511280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.511295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.511444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.511460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.511695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.511710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.511822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.511837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.512060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.512076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.512237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.512252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.512452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.512484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.512752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.512783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.513028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.513059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.513271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.513302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.513515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.513547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.513679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.513710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.513981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.514013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.514285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.514316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.514608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.514640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.514790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.514822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.515111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.515148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.515404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.515436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.515683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.515715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.515899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.515929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.516224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.516255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.516464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.516496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.516712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.516757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.516903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.516918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 
00:27:57.741 [2024-07-12 14:32:49.517119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.517150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.517437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.741 [2024-07-12 14:32:49.517469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.741 qpair failed and we were unable to recover it. 00:27:57.741 [2024-07-12 14:32:49.517718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.517750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.517926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.517941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.518151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.518168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 
00:27:57.742 [2024-07-12 14:32:49.518349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.518364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.518627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.518661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.518929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.518960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.519221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.519252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 00:27:57.742 [2024-07-12 14:32:49.519404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.742 [2024-07-12 14:32:49.519437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.742 qpair failed and we were unable to recover it. 
00:27:57.744 [2024-07-12 14:32:49.545661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.545691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.545870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.545901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.546188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.546221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.546443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.546476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.546748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.546787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 
00:27:57.744 [2024-07-12 14:32:49.547021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.547037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.547259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.547275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.547458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.547474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.547656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.547687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.547968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.547999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 
00:27:57.744 [2024-07-12 14:32:49.548275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.548291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.548401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.744 [2024-07-12 14:32:49.548418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.744 qpair failed and we were unable to recover it. 00:27:57.744 [2024-07-12 14:32:49.548636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.548667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.548871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.548902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.549107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.549138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.549326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.549357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.549620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.549652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.549925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.549956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.550147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.550178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.550361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.550402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.550674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.550704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.550893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.550908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.551094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.551126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.551308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.551339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.551580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.551613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.551906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.551937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.552209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.552240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.552533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.552566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.552764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.552795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.553092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.553123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.553403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.553434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.553613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.553643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.553887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.553918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.554179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.554210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.554460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.554492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.554673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.554703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.554950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.554981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.555224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.555255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.555514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.555546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.555731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.555762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.556034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.556065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.556334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.556366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.556627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.556660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.556909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.556945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.557131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.557162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.557432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.557465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.557660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.557690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.557950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.557982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.558229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.558245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.558407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.558423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.558662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.558693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.558844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.558875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.559174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.559206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.559330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.559362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.559617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.559648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 
00:27:57.745 [2024-07-12 14:32:49.559943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.559959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.745 qpair failed and we were unable to recover it. 00:27:57.745 [2024-07-12 14:32:49.560115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.745 [2024-07-12 14:32:49.560131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.560348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.560386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.560651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.560684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.560888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.560904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 
00:27:57.746 [2024-07-12 14:32:49.561140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.561156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.561323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.561354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.561551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.561584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.561784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.561814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.562025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.562056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 
00:27:57.746 [2024-07-12 14:32:49.562293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.562308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.562523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.562540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.562685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.562702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.562945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.562977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.563246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.563277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 
00:27:57.746 [2024-07-12 14:32:49.563564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.563583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.563816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.563831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.564061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.564077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.564159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.564174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 00:27:57.746 [2024-07-12 14:32:49.564283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.746 [2024-07-12 14:32:49.564299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.746 qpair failed and we were unable to recover it. 
00:27:57.746 [2024-07-12 14:32:49.564531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.746 [2024-07-12 14:32:49.564547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:57.746 qpair failed and we were unable to recover it.
00:27:57.749 [2024-07-12 14:32:49.592669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.592701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.592946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.592978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.593224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.593261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.593478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.593511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.593694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.593725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.593923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.593955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.594203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.594219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.594397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.594413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.594668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.594698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.594947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.594979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.595253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.595293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.595489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.595506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.595658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.595674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.595928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.595945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.596039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.596054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.596154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.596169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.596353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.596369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.596615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.596632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.596814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.596830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.597063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.597079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.597189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.597221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.597465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.597498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.597695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.597727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.597980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.598012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.598243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.598259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.598502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.598535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.598793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.598824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.599076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.599107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.599299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.599332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.599614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.599646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.599927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.599959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.600231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.600263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.600458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.600491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.600769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.600801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 00:27:57.749 [2024-07-12 14:32:49.600993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.749 [2024-07-12 14:32:49.601024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.749 qpair failed and we were unable to recover it. 
00:27:57.749 [2024-07-12 14:32:49.601231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.601262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.601509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.601542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.601815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.601846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.601983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.602015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.602266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.602297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.602545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.602578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.602766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.602798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.602995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.603026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.603318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.603399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.603652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.603722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.604014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.604032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.604209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.604225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.604334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.604367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.604647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.604680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.604938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.604970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.605264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.605295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.605498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.605531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.605731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.605763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.606010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.606042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.606244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.606260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.606349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.606386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.606604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.606645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.606927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.606959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.607274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.607305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.607565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.607599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.607805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.607836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.608092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.608123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.608395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.608413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.608582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.608598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.608816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.608848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.609039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.609071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.609341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.609373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.609646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.609678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.609978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.610010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.610276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.610308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.610434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.610467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.610691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.610723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.610970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.611001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.611192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.611223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 00:27:57.750 [2024-07-12 14:32:49.611416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.750 [2024-07-12 14:32:49.611450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.750 qpair failed and we were unable to recover it. 
00:27:57.750 [2024-07-12 14:32:49.611726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.750 [2024-07-12 14:32:49.611757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.750 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats for tqpair=0x7f9244000b90 and tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420, from 14:32:49.611 through 14:32:49.641 ...]
00:27:57.753 [2024-07-12 14:32:49.641674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.753 [2024-07-12 14:32:49.641706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.753 qpair failed and we were unable to recover it.
00:27:57.753 [2024-07-12 14:32:49.641981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.642025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.642188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.642204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.642324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.642356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.642635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.642667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.642909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.642941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 
00:27:57.753 [2024-07-12 14:32:49.643143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.643175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.643422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.643439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.643673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.643689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.643947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.643963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.644193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.644209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 
00:27:57.753 [2024-07-12 14:32:49.644374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.644396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.753 qpair failed and we were unable to recover it. 00:27:57.753 [2024-07-12 14:32:49.644641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.753 [2024-07-12 14:32:49.644672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.644862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.644894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.645162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.645200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.645402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.645435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.645684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.645715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.645989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.646021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.646232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.646263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.646511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.646543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.646747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.646778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.646970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.647001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.647279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.647311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.647507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.647540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.647793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.647825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.648089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.648120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.648327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.648358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.648588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.648604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.648850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.648881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.649132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.649164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.649438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.649455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.649693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.649709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.649925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.649942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.650104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.650120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.650337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.650368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.650586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.650618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.650885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.650916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.651210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.651241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.651436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.651468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.651719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.651751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.651966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.651997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.652251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.652267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.652430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.652446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.652712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.652743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.652944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.652977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.653186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.653202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.653481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.653515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.653772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.653804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.654099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.654131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.654251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.654282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.654494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.654526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.654802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.654833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.655117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.655148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.655372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.655412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.655607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.655644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.655870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.655902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 
00:27:57.754 [2024-07-12 14:32:49.656195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.656237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.656398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.656414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.656650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.754 [2024-07-12 14:32:49.656667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.754 qpair failed and we were unable to recover it. 00:27:57.754 [2024-07-12 14:32:49.656830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.656846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.657082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.657099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 
00:27:57.755 [2024-07-12 14:32:49.657260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.657276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.657519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.657552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.657748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.657779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.657959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.657991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.658265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.658297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 
00:27:57.755 [2024-07-12 14:32:49.658504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.658520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.658774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.658805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.658955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.658987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.659194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.659225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.659494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.659527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 
00:27:57.755 [2024-07-12 14:32:49.659824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.659855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.660066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.660098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.660278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.660294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.660441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.660473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 00:27:57.755 [2024-07-12 14:32:49.660750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.755 [2024-07-12 14:32:49.660781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:57.755 qpair failed and we were unable to recover it. 
00:27:57.755 [2024-07-12 14:32:49.661052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.661084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.661292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.661324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.661557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.661590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.661838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.661870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.662131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.662163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.662360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.662404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.662658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.662674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.662840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.662856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.663106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.663137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.663398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.663431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.663676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.663692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.663922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.663938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.664199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.664216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.664387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.664404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.664663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.664680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.664838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.664854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.665010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.665041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.665237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.665269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.665449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.665487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.665738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.665770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.665952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.665984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.666177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.666193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.666434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.666466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.666767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.666798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.666997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.667029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.755 qpair failed and we were unable to recover it.
00:27:57.755 [2024-07-12 14:32:49.667271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.755 [2024-07-12 14:32:49.667287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.667520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.667537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.667713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.667744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.667947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.667978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.668189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.668205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.668391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.668424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.668604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.668634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.668822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.668852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.669061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.669094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.669332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.669348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.669590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.669607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.669758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.669774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.669960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.669992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.670277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.670308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.670529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.670546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.670798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.670830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.671100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.671131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.671350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.671366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.671485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.671527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.671744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.671776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.672060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.672092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.672279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.672295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.672465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.672481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.672647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.672664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.672919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.672936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.673094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.673111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.673349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.673390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.673645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.673677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.673873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.673905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.674154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.674185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.674458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.674491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.674641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.674673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.674863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.674895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.675095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.675133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.675397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.675414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.675646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.675662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.675810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.675825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.676064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.676081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.676245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.676261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.676480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.676513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.676810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.676842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.756 [2024-07-12 14:32:49.677098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.756 [2024-07-12 14:32:49.677130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.756 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.677447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.677480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.677657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.677688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.677962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.677994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.678183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.678199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.678459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.678476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.678715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.678732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.678875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.678891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.679133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.679149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.679331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.679347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.679510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.679543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.679795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.679827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.680105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.680137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.680436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.680470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.680741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.680772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.680986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.681018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.681285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.681316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.681608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.681654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.681925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.681957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.682206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.682222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.682445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.682462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.682648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.682665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.682877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.682894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.683132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.683148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.683307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.683323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.683432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.683447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.683664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.683681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.683933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.683949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.684206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.684222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.684439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.684455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.684649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.684666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.684756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.684771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.684952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.684972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.685117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.685133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.685233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.685247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.685462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.685479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.685649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.685665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.685892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.685909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.686144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.686160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.686353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.686369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.686479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.757 [2024-07-12 14:32:49.686495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.757 qpair failed and we were unable to recover it.
00:27:57.757 [2024-07-12 14:32:49.686702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.686719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.686872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.686888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.686977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.686992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.687252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.687268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.687503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.687520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.687690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.687708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.687922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.687938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.688096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.688112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.688272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.688288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.688438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.688454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.688638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.688653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.688813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.688830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.689074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.689090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.689373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.689394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.689611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.689627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.689796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.689813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.690054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.690070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.690229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.690245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.690337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.690352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.690622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.690639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.690796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.690813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.691965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.691981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.692075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.692088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.692285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.692302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.692541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.692558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.692791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.692807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.692956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.692975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.693191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.693207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.693429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.693446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.693721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.693738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.693968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.693984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.694221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.694238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.694454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.694470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.694707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.694723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.694814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.694829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.694993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.695009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.695247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.695263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.695362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.695376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.695622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.695639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.758 [2024-07-12 14:32:49.695881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.758 [2024-07-12 14:32:49.695897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.758 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.696969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.696985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.697152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.697168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.697388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.697405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.697578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.697594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.697832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.697847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.698091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.698107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.698256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.698272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.698452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.698468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.698717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.698733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.698906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.698922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.699016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.699030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.699176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.699193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.699406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.699422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.699522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.699537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.699802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.699818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.700959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.700975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.701172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.701190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.701429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.701445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.701633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.701650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.701730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.701744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.701970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.701986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.702152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.702168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.702410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.702426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.702661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.702676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.702835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.702850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.703068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.703085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.703301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.703317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.703574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.703590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.703780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.703796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.704013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.704029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.704207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.704223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.704386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.704402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.704645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.704660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.704837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.759 [2024-07-12 14:32:49.704853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.759 qpair failed and we were unable to recover it.
00:27:57.759 [2024-07-12 14:32:49.705039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.705200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.705432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.705613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.705719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.705978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.705993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.706103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.706119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.706267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.706283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.706496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.706512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.706774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.706804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.706963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.706976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.707185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.707197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.707386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.707398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.707614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.760 [2024-07-12 14:32:49.707627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:57.760 qpair failed and we were unable to recover it.
00:27:57.760 [2024-07-12 14:32:49.707783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.707795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.708011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.708193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.708432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.708535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 
00:27:57.760 [2024-07-12 14:32:49.708698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.708852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.708865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.709072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.709085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.709237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.709253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.709406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.709419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 
00:27:57.760 [2024-07-12 14:32:49.709613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.709625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:57.760 [2024-07-12 14:32:49.709858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.760 [2024-07-12 14:32:49.709871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:57.760 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.710075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.710087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.710371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.710388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.710622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.710637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.710865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.710875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.711136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.711146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.711264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.711274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.711454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.711464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.711576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.711586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.711868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.711878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.712435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.712451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.712789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.712800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.712957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.712966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.713078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.713213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.713413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.713619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.713773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.713874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.713885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.714064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.714075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.714302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.714315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.714467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.714478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.714685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.714696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.714795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.714805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.714998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.715138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.715386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.715569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.715820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.715953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.715967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.716076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.716090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.716217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.716231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.716345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.716359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 00:27:58.054 [2024-07-12 14:32:49.716536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.716551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.054 qpair failed and we were unable to recover it. 
00:27:58.054 [2024-07-12 14:32:49.716763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.054 [2024-07-12 14:32:49.716776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.716910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.716923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.717109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.717123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.717403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.717422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.717634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.717647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.717756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.717770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.718000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.718176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.718290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.718538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.718719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.718969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.718983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.719218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.719232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.719387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.719403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.719628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.719643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.719800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.719814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.719973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.720004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.720181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.720196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.720411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.720425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.720591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.720605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.720859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.720872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.721023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.721035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.721242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.721254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.721493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.721506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.721740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.721752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.721849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.721862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.722610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.722986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.722998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.723164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.723177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.723414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.723426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 
00:27:58.055 [2024-07-12 14:32:49.723705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.723718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.723824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.723837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.055 qpair failed and we were unable to recover it. 00:27:58.055 [2024-07-12 14:32:49.724089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.055 [2024-07-12 14:32:49.724102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.724274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.724287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.724505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.724519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.724676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.724689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.724906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.724921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.725142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.725155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.725313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.725327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.725432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.725446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.725662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.725677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.725861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.725875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.726155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.726169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.726404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.726418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.726536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.726548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.726666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.726679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.726791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.726803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.727011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.727183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.727402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.727565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.727738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.727925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.727939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.728027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.728151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.728339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.728527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.728698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.728899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.728913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.729011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.729182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.729339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.729448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.729641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.729759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.729951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.729986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.730153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.730165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.730433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.730443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.730625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.730635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.730816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.730826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 
00:27:58.056 [2024-07-12 14:32:49.730925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.730934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.731161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.731171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.056 [2024-07-12 14:32:49.731457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.056 [2024-07-12 14:32:49.731468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.056 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.731676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.731686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.731839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.731849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.732026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.732268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.732499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.732657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.732760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.732926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.732935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.733034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.733261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.733448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.733616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.733760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.733869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.733879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.734071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.734081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.734307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.734316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.734491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.734501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.734708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.734718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.734942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.734952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.735064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.735074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.735302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.735312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.735502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.735512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.735720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.735730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.735934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.735945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.736574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.736971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.736981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.737151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.737161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.737306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.737316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.737518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.737528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.737677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.737687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.737960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.737970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.738171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.738181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 
00:27:58.057 [2024-07-12 14:32:49.738318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.738533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.738544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.738681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.738691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.057 [2024-07-12 14:32:49.738837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.057 [2024-07-12 14:32:49.738847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.057 qpair failed and we were unable to recover it. 00:27:58.058 [2024-07-12 14:32:49.738999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.739009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 
00:27:58.058 [2024-07-12 14:32:49.739238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.739249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 00:27:58.058 [2024-07-12 14:32:49.739414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.739425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 00:27:58.058 [2024-07-12 14:32:49.739654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.739665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 00:27:58.058 [2024-07-12 14:32:49.739871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.739881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 00:27:58.058 [2024-07-12 14:32:49.740112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.058 [2024-07-12 14:32:49.740122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.058 qpair failed and we were unable to recover it. 
00:27:58.058 [2024-07-12 14:32:49.740349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.058 [2024-07-12 14:32:49.740359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.058 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple for tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420 repeated through 14:32:49.760093; repeats elided ...]
00:27:58.061 [2024-07-12 14:32:49.760261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.760270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.760498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.760508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.760654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.760664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.760811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.760821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.760971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.760981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.761133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.761143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.761347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.761357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.761549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.761559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.761665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.761675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.761777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.761786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.762053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.762179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.762390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.762557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.762733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.762879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.762889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.763121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.763275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.763462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.763566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.763708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.763886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.763897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.764044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.764054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.764220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.764229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.764408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.764418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.764597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.764607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.764761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.764771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.061 [2024-07-12 14:32:49.765343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.765940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.765950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.061 [2024-07-12 14:32:49.766167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.061 [2024-07-12 14:32:49.766177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.061 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.766253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.766262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.766408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.766418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.766566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.766576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.766734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.766744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.766923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.766932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.767108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.767209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.767289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.767523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.767696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.767973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.767983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.768253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.768263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.768415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.768434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.768598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.768608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.768759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.768768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.769009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.769019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.769157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.769166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.769429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.769640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.769650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.769819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.769829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.770024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.770034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.770293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.770302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.770463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.770473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.770655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.770665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.770752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.770761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.771010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.771173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.771355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.771535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.771614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.771837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.771942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.771951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.772236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.772246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.772472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.772482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 00:27:58.062 [2024-07-12 14:32:49.772621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.062 [2024-07-12 14:32:49.772631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.062 qpair failed and we were unable to recover it. 
00:27:58.062 [2024-07-12 14:32:49.772699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.063 [2024-07-12 14:32:49.772710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.063 qpair failed and we were unable to recover it. 00:27:58.063 [2024-07-12 14:32:49.772913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.063 [2024-07-12 14:32:49.772922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.063 qpair failed and we were unable to recover it. 00:27:58.063 [2024-07-12 14:32:49.773075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.063 [2024-07-12 14:32:49.773085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.063 qpair failed and we were unable to recover it. 00:27:58.063 [2024-07-12 14:32:49.773247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.063 [2024-07-12 14:32:49.773257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.063 qpair failed and we were unable to recover it. 00:27:58.063 [2024-07-12 14:32:49.773366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.063 [2024-07-12 14:32:49.773376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.063 qpair failed and we were unable to recover it. 
00:27:58.063 [2024-07-12 14:32:49.773497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.773507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.773600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.773609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.773764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.773774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.774925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.774935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.775883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.775892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.776937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.776948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.777917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.777927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.778115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.778125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.778290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.778299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.778476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.778487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.778637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.778647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.778817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.778826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.779022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.779032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.063 [2024-07-12 14:32:49.779261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.063 [2024-07-12 14:32:49.779272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.063 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.779427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.779437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.779580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.779590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.779696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.779706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.779886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.779895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.780975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.780985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.781978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.781988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.782917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.782926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.783961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.783970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.784121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.784130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.784264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.064 [2024-07-12 14:32:49.784275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.064 qpair failed and we were unable to recover it.
00:27:58.064 [2024-07-12 14:32:49.784531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.784542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.784630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.784639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.784773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.784783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.784867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.784876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.784982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.784992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.785846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.785856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.786953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.786963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.787883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.787892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.788914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.788924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.789937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.789947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.790139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.790148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.790311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.790320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.790488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.790499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.790654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.790664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.790847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.065 [2024-07-12 14:32:49.790856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.065 qpair failed and we were unable to recover it.
00:27:58.065 [2024-07-12 14:32:49.791058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.066 [2024-07-12 14:32:49.791068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.066 qpair failed and we were unable to recover it.
00:27:58.066 [2024-07-12 14:32:49.791140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.066 [2024-07-12 14:32:49.791149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.066 qpair failed and we were unable to recover it.
00:27:58.066 [2024-07-12 14:32:49.791307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.066 [2024-07-12 14:32:49.791317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.066 qpair failed and we were unable to recover it.
00:27:58.066 [2024-07-12 14:32:49.791555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.066 [2024-07-12 14:32:49.791565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.066 qpair failed and we were unable to recover it.
00:27:58.066 [2024-07-12 14:32:49.791654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.066 [2024-07-12 14:32:49.791663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.066 qpair failed and we were unable to recover it.
00:27:58.066 [2024-07-12 14:32:49.791809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.791821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.791954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.791964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.792048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.792285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.792395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.792580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.792723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.792877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.792887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.793098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.793257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.793503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.793625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.793787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.793938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.793948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.794108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.794307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.794471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.794638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.794735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.794893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.794903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.795127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.795230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.795428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.795638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.795793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.795955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.795965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.796188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.796388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.796549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.796667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.796856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.796977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.796987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.797236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.797245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.797518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.797529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.066 [2024-07-12 14:32:49.797680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.797690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 
00:27:58.066 [2024-07-12 14:32:49.797839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.066 [2024-07-12 14:32:49.797848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.066 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 
00:27:58.067 [2024-07-12 14:32:49.798568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.798901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.798910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.799110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.799120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.799281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.799291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 
00:27:58.067 [2024-07-12 14:32:49.799501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.799512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.799684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.799694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.799787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.799797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.800047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.800304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 
00:27:58.067 [2024-07-12 14:32:49.800467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.800564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.800723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.800817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.800826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 00:27:58.067 [2024-07-12 14:32:49.801064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.067 [2024-07-12 14:32:49.801074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.067 qpair failed and we were unable to recover it. 
00:27:58.067 [2024-07-12 14:32:49.801179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.801370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.801465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.801554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.801672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 
00:27:58.068 [2024-07-12 14:32:49.801853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.801863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 
00:27:58.068 [2024-07-12 14:32:49.802712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.802919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.802929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 
00:27:58.068 [2024-07-12 14:32:49.803410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.068 [2024-07-12 14:32:49.803922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.803932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 
00:27:58.068 [2024-07-12 14:32:49.804105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.068 [2024-07-12 14:32:49.804115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.068 qpair failed and we were unable to recover it. 00:27:58.069 [2024-07-12 14:32:49.812509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.069 [2024-07-12 14:32:49.812546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.069 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-07-12 14:32:49.827168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.827199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.827400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.827431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.827699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.827730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.827941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.827957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.828220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.828263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-07-12 14:32:49.828464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.828495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.828695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.828726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.828930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.828946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.829124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.829155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.829422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.829453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-07-12 14:32:49.829637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.829652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.829772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.829800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.830038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.830069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.830249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.830279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.830531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.830564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-07-12 14:32:49.830788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.830804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.830985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.831016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.831276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.831307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.831592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.831623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.831834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.831865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-07-12 14:32:49.832012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.832044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.832252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.832283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.832503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.071 [2024-07-12 14:32:49.832534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.071 qpair failed and we were unable to recover it. 00:27:58.071 [2024-07-12 14:32:49.832777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.832808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.832957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.832989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.833181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.833212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.833399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.833432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.833624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.833655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.833840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.833855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.834006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.834037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.834255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.834286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.834448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.834481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.834606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.834642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.834801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.834817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.835059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.835075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.835336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.835367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.835529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.835561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.835807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.835843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.836036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.836067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.836262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.836294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.836490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.836523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.836726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.836757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.836895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.836925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.837123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.837154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.837422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.837455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.837588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.837604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.837789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.837821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.838052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.838084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.838283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.838314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.838508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.838541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.838675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.838706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.838955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.838987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.839240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.839271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.839414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.839446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.839692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.839724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.839912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.839943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.840190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.840222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.840349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.840390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.840649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.840680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.840875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.840906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-07-12 14:32:49.841194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.841226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.841424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.841458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.841607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.841638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.072 [2024-07-12 14:32:49.841824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.072 [2024-07-12 14:32:49.841855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.072 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.842163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.842194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-07-12 14:32:49.842466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.842505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.842779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.842810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.843023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.843054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.843322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.843353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.843641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.843657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-07-12 14:32:49.843829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.843844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.843957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.843989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.844193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.844224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.844443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.844475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.844679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.844695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-07-12 14:32:49.844879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.845186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.845218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.845491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.845528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.845747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.845779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 00:27:58.073 [2024-07-12 14:32:49.845911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.073 [2024-07-12 14:32:49.845927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.871389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.871406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.871527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.871543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.871654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.871670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.871747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.871761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.871862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.871877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.872049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.872065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.872286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.872318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.872436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.872485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.872634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.872666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.872840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.872855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.873026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.873057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.873209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.873240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.873450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.873484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.873627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.873658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.873802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.873833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.874166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.874197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.874334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.874366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.874534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.874566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.874811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.874843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.875039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.875303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.875483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.875655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.875819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.875981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.875996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.876157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.876188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.876406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.876438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.876576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.876608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.876797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.876813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.877010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.877041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 
00:27:58.076 [2024-07-12 14:32:49.877174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.877206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.877459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.877501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.076 qpair failed and we were unable to recover it. 00:27:58.076 [2024-07-12 14:32:49.877656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.076 [2024-07-12 14:32:49.877672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.877829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.877861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.878113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.878144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.878399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.878432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.878634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.878664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.878883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.878915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.879095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.879125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.879266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.879570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.879603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.879846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.879877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.880015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.880046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.880270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.880301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.880499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.880532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.880666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.880697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.880879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.880910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.881081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.881099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.881322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.881337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.881502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.881518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.881700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.881731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.881878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.881910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.882204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.882235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.882403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.882436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.882588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.882619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.882891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.882923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.883188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.883219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.883500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.883535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.883645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.883661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.883781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.883797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.883910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.883925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.884049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.884232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.884399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.884583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.884800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.884983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.884999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.885222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.885254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.885425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.885458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.885584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.885616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 
00:27:58.077 [2024-07-12 14:32:49.885751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.885783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.885927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.885958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.886215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.886247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.077 qpair failed and we were unable to recover it. 00:27:58.077 [2024-07-12 14:32:49.886513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.077 [2024-07-12 14:32:49.886547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.078 qpair failed and we were unable to recover it. 00:27:58.078 [2024-07-12 14:32:49.886805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.078 [2024-07-12 14:32:49.886822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.078 qpair failed and we were unable to recover it. 
00:27:58.078 [2024-07-12 14:32:49.886985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.887154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.887299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.887569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.887798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.887921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.887937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.888199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.888234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.888373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.888414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.888555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.888586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.888726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.888758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.888955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.888986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.889179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.889212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.889412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.889450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.889653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.889684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.889836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.889868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.890140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.890156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.890329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.890360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.890578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.890609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.890799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.890814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.890973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.890989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.891258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.891290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.891498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.891532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.891732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.891763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.891958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.891989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.892112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.892127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.892310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.892326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.892421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.892437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.892570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.892601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.892735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.892767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.893049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.893080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.893328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.893359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.893579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.893610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.893815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.893848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.894054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.894085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.894403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.894619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.894650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.894792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.894823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.894974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.894989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.895185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.078 [2024-07-12 14:32:49.895217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.078 qpair failed and we were unable to recover it.
00:27:58.078 [2024-07-12 14:32:49.895399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.895471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.895696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.895732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.895918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.895935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.896061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.896094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.896349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.896394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.896603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.896636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.896786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.896817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.897045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.897061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.897229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.897261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.897479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.897514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.897776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.897809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.898017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.898048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.898176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.898210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.898462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.898504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.898659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.898691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.898895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.898928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.899060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.899091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.899362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.899402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.899603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.899635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.899828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.899861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.900080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.900113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.900365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.900412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.900564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.900597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.900806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.900838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.900971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.901003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.901222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.901255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.901507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.901540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.901746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.901778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.901917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.901948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.902195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.902228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.902344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.902376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.902591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.902624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.902915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.902948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.903075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.903107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.903354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.903394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.903542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.903574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.903698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.903730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.903940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.903973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.904102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.904134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.904331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.904364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.904590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.079 [2024-07-12 14:32:49.904626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.079 qpair failed and we were unable to recover it.
00:27:58.079 [2024-07-12 14:32:49.904827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.904858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.905137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.905170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.905431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.905464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.905662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.905694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.905844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.905875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.906073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.906089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.906319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.906351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.906569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.906602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.906870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.906902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.907041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.907072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.907319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.907351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.907641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.907674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.907923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.907960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.908160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.908191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.908482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.908518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.908816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.908848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.909102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.909135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.909416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.909449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.909723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.909755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.909876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.909907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.910181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.910197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.910356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.910373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.910557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.910574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.910815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.910847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.911124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.911156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.911387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.911420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.911631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.911664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.911888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.911921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.912192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.912225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.912433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.912473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.912666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.912699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.912904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.912936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.913184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.913216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.913420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.913453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.913594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.080 [2024-07-12 14:32:49.913636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.080 qpair failed and we were unable to recover it.
00:27:58.080 [2024-07-12 14:32:49.913773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.913790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.913907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.913924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.914216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.914248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.914549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.914582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.914775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.914808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.915119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.915151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.915415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.915447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.915652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.915685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.915887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.915920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.916129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.916146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.916329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.916361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.916573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.916608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.916861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.916894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.917110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.917142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.917414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.917448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.917659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.917691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.917879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.917911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.918029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.918047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.918269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.918302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.918537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.918570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.918772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.918804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.918937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.918970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.919294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.919326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.919538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.919571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.919775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.919808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.920022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.920210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.920338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.920529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.920704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.920861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.920892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.921200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.921233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.921514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.921548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.921831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.921863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.922156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.922189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.922394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.922426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.922717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.922749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.923005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.923037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.923168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.923200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.923406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.923439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 
00:27:58.081 [2024-07-12 14:32:49.923577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.081 [2024-07-12 14:32:49.923608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.081 qpair failed and we were unable to recover it. 00:27:58.081 [2024-07-12 14:32:49.923780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.923811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.924027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.924071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.924311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.924327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2704141 Killed "${NVMF_APP[@]}" "$@" 00:27:58.082 [2024-07-12 14:32:49.924598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.924637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.924762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.924778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.924868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.924883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.925059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:58.082 [2024-07-12 14:32:49.925268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.925461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:58.082 [2024-07-12 14:32:49.925597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.925733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.082 [2024-07-12 14:32:49.925840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.925856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.082 [2024-07-12 14:32:49.926115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.926131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.082 [2024-07-12 14:32:49.926367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.926391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.926614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.926630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.926790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.926805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.927054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.927165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.927365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.927576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.927699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.927896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.927913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.928016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.928031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.928215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.928231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.928536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.928552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.928721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.928738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.928975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.928992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.929151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.929171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.929335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.929351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.929617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.929633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.929751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.929767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.929928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.929944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.930178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.930194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.930417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.930434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.930599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.930615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.930712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.930727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.930992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.931008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 00:27:58.082 [2024-07-12 14:32:49.931120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.931136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.082 qpair failed and we were unable to recover it. 
00:27:58.082 [2024-07-12 14:32:49.931286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.082 [2024-07-12 14:32:49.931301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.083 qpair failed and we were unable to recover it. 00:27:58.083 [2024-07-12 14:32:49.931492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.083 [2024-07-12 14:32:49.931506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.083 qpair failed and we were unable to recover it. 00:27:58.083 [2024-07-12 14:32:49.931620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.083 [2024-07-12 14:32:49.931634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.083 qpair failed and we were unable to recover it. 00:27:58.083 [2024-07-12 14:32:49.931797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.083 [2024-07-12 14:32:49.931812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.083 qpair failed and we were unable to recover it. 00:27:58.083 [2024-07-12 14:32:49.932094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.083 [2024-07-12 14:32:49.932110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.083 qpair failed and we were unable to recover it. 
00:27:58.083 [2024-07-12 14:32:49.932288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.932304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.932462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.932479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.932662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.932678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.932780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.932795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.932969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2704976
00:27:58.083 [2024-07-12 14:32:49.932985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.933232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.933249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2704976
00:27:58.083 [2024-07-12 14:32:49.933428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.933445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2704976 ']'
00:27:58.083 [2024-07-12 14:32:49.933724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.933740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.933905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.933921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:58.083 [2024-07-12 14:32:49.934185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.934217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:58.083 [2024-07-12 14:32:49.934400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.934415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:58.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:58.083 [2024-07-12 14:32:49.934592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.934605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.934757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.934770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:58.083 [2024-07-12 14:32:49.934929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.934942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 14:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:58.083 [2024-07-12 14:32:49.935160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.935174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.935406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.935419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.935570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.935582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.083 [2024-07-12 14:32:49.935788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.083 [2024-07-12 14:32:49.935801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.083 qpair failed and we were unable to recover it.
00:27:58.085 [2024-07-12 14:32:49.950576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.950588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 00:27:58.085 [2024-07-12 14:32:49.950680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.950691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 00:27:58.085 [2024-07-12 14:32:49.950848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.950861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 00:27:58.085 [2024-07-12 14:32:49.951012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.951024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 00:27:58.085 [2024-07-12 14:32:49.951166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.951178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-07-12 14:32:49.951280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.085 [2024-07-12 14:32:49.951291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.085 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.951452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.951465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.951650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.951662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.951846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.951858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.952066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.952250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.952501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.952605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.952709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.952926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.952938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.953167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.953179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.953375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.953393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.953574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.953586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.953746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.953758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.953906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.953918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.954108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.954120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.954384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.954396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.954551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.954563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.954726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.954738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.954896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.954908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.955117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.955284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.955407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.955624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.955783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.955947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.955959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.956182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.956193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.956352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.956365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.956579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.956592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.956746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.956759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.956839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.956850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.957131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.957143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.957330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.957342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.957481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.957494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.957709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.957722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.957882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.957895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.958111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.958297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.958512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.958675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-07-12 14:32:49.958783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.086 [2024-07-12 14:32:49.958934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.086 [2024-07-12 14:32:49.958946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.086 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.959088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.959100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.959243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.959254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.959464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.959476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.959682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.959696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.959853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.959865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.960087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.960099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.960336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.960347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.960562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.960574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.960719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.960731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.960873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.960885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.963481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.963495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.963719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.963731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.963947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.963959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.964124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.964319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.964495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.964592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.964718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.964890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.964901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.965047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.965216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.965369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.965558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.965794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.965888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.965899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.966096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.966260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.966504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.966620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.966739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.966901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.966912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.967091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.967308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.967502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.967672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.967817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.967976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.967988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.968091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.968101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-07-12 14:32:49.968323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.968335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.968483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.968495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.087 qpair failed and we were unable to recover it. 00:27:58.087 [2024-07-12 14:32:49.968645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.087 [2024-07-12 14:32:49.968657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.968834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.968846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.968940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.968952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.969103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.969116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.969265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.969278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.969509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.969522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.969631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.969641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.969799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.970108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.970725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.970947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.970958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.971106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.971119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.971360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.971372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.971539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.971551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.971719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.971732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.971811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.971821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.972270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.972856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.972942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.972954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.973054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.973208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.973366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.973631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 
00:27:58.088 [2024-07-12 14:32:49.973798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.973950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.973963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.088 [2024-07-12 14:32:49.974130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.088 [2024-07-12 14:32:49.974142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.088 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.974327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.974543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.974647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.974742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.974824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.974972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.974984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.975102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.975293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.975398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.975511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.975658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.975809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.975975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.975986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.976622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.976979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.976990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.977213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.977226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.977386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.977398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.977554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.977719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.977731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.977815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.977826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.978017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.978121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.978274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.978461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.978680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.978896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 
00:27:58.089 [2024-07-12 14:32:49.978981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.978992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.979132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.089 [2024-07-12 14:32:49.979144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.089 qpair failed and we were unable to recover it. 00:27:58.089 [2024-07-12 14:32:49.979296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.979308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.979442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.979454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.979538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.979549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 
00:27:58.090 [2024-07-12 14:32:49.979740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.979752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.979861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.979874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.980051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.980285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.980537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 
00:27:58.090 [2024-07-12 14:32:49.980632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.980788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.980882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.980892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.981066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.981192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 
00:27:58.090 [2024-07-12 14:32:49.981438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.981609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.981680] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:27:58.090 [2024-07-12 14:32:49.981705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it. 00:27:58.090 [2024-07-12 14:32:49.981720] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.090 [2024-07-12 14:32:49.981870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.090 [2024-07-12 14:32:49.981880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.982092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.982102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.982353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.982363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.982615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.982627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.982829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.982840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.983956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.983966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.984121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.984133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.984289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.984301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.984407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.984418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.984568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.984579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.984742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.984754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.985008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.985020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.985115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.985126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.985335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.090 [2024-07-12 14:32:49.985346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.090 qpair failed and we were unable to recover it.
00:27:58.090 [2024-07-12 14:32:49.985501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.985513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.985724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.985735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.985816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.985826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.985926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.985938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.986268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.986280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.986488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.986500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.986592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.986604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.986751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.986763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.986864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.986876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.987883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.988046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.988058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.988274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.988287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.988513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.988525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.988633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.988644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.988789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.988800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.989853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.989863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.990881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.990892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.991107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.991119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.991287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.991299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.991444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.991456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.991658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.991670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.991836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.991848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.992011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.992023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.091 [2024-07-12 14:32:49.992173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.091 [2024-07-12 14:32:49.992184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.091 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.992332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.992343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.992479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.992492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.992581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.992591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.992807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.992819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.992973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.992985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.993941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.993952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.994954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.994966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.995913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.995924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.996118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.996130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.996340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.996351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.996581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.996594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.996764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.996775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.996872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.996884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.997137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.997405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.997518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.997731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.997907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.997993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.998003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.092 [2024-07-12 14:32:49.998241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.092 [2024-07-12 14:32:49.998253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.092 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.998458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.998470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.998625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.998637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.998793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.998805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.998958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.998971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:49.999910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:49.999922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:50.000081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:50.000275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:50.000436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:50.000557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.093 [2024-07-12 14:32:50.000747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.093 qpair failed and we were unable to recover it.
00:27:58.093 [2024-07-12 14:32:50.000916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.000928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 
00:27:58.093 [2024-07-12 14:32:50.001751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.001871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.001990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.002196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.002292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 
00:27:58.093 [2024-07-12 14:32:50.002467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.002701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.002869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.002881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 
00:27:58.093 [2024-07-12 14:32:50.003390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 00:27:58.093 [2024-07-12 14:32:50.003850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.093 [2024-07-12 14:32:50.003861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.093 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.003968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.003980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.004154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.004167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.004316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.004328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.004504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.004516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.004717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.004729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.004869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.004881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.005768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.005955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.005968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.006052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.006216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.006389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.006612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.006810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.006925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.006936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.007113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.007125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.007376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.007401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.007559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.007571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.007667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.007678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.007857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.007870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.008063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.008168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.008328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.008515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.008620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.008781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.008876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.008888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.009034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.009047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.009207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.009219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.009429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.094 [2024-07-12 14:32:50.009442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 00:27:58.094 [2024-07-12 14:32:50.009543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.094 [2024-07-12 14:32:50.009556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.094 qpair failed and we were unable to recover it. 
00:27:58.094 [2024-07-12 14:32:50.009694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.009706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.009790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.009801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.009902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.009914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 
00:27:58.095 [2024-07-12 14:32:50.010301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.010873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 
00:27:58.095 [2024-07-12 14:32:50.010979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.010991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.011143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.011339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.011512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.011685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 
00:27:58.095 [2024-07-12 14:32:50.011789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.011958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.011970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.012152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.012401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.012581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 
00:27:58.095 [2024-07-12 14:32:50.012739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.012855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.012950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.012961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.013053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.013064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 00:27:58.095 [2024-07-12 14:32:50.013248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.095 [2024-07-12 14:32:50.013261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.095 qpair failed and we were unable to recover it. 
00:27:58.098 [2024-07-12 14:32:50.032871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.098 [2024-07-12 14:32:50.032882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.098 qpair failed and we were unable to recover it. 00:27:58.098 [2024-07-12 14:32:50.033066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.098 [2024-07-12 14:32:50.033079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.098 qpair failed and we were unable to recover it. 00:27:58.098 [2024-07-12 14:32:50.033232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.098 [2024-07-12 14:32:50.033244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.098 qpair failed and we were unable to recover it. 00:27:58.098 [2024-07-12 14:32:50.033405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.098 [2024-07-12 14:32:50.033417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.098 qpair failed and we were unable to recover it. 00:27:58.098 [2024-07-12 14:32:50.033575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.098 [2024-07-12 14:32:50.033587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.098 qpair failed and we were unable to recover it. 
00:27:58.376 [2024-07-12 14:32:50.033739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.033751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.033905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.033917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 
00:27:58.376 [2024-07-12 14:32:50.034420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.034940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.034953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.035114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 
00:27:58.376 [2024-07-12 14:32:50.035322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.035514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.035638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.035743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.035898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.035910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 
00:27:58.376 [2024-07-12 14:32:50.036128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.036240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.036349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.036458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.376 [2024-07-12 14:32:50.036650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 
00:27:58.376 [2024-07-12 14:32:50.036836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.376 [2024-07-12 14:32:50.036848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.376 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.036939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.036949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.037524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.037891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.037990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.038085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.038164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.038310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.038523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.038628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.038786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.038893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.038904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.039094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.039272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.039481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.039581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.039743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.039969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.039980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.040133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.040145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.040305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.040318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.040538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.040550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.040707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.040719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.040885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.040897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.041079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.041247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.041408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.041637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.041749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.041908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.041919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.042143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.042155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 
00:27:58.377 [2024-07-12 14:32:50.042329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.042341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.042513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.042526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.042675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.042687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.042839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.377 [2024-07-12 14:32:50.042851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.377 qpair failed and we were unable to recover it. 00:27:58.377 [2024-07-12 14:32:50.042988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.042999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 
00:27:58.378 [2024-07-12 14:32:50.043247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.043259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.043523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.043535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.043613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.043624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.043731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.043742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.043949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.043960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 
00:27:58.378 [2024-07-12 14:32:50.044177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.044275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.044372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.044484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.044610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 
00:27:58.378 [2024-07-12 14:32:50.044767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.044873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.044884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.045035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.045047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.045184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.045196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 00:27:58.378 [2024-07-12 14:32:50.045335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.378 [2024-07-12 14:32:50.045346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.378 qpair failed and we were unable to recover it. 
00:27:58.379 [2024-07-12 14:32:50.055330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:58.381 [2024-07-12 14:32:50.064623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.064636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.064735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.064747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.064847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.064859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.065017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.065109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 
00:27:58.381 [2024-07-12 14:32:50.065370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.065537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.065687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.065791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.065803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.066045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 
00:27:58.381 [2024-07-12 14:32:50.066204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.066387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.066565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.066780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.066872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.066883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 
00:27:58.381 [2024-07-12 14:32:50.067186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.067198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.067410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.067422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.067579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.067591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.067756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.067768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.067922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.067935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 
00:27:58.381 [2024-07-12 14:32:50.068178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.068190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.068466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.068478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.068571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.068584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.068663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.068674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 00:27:58.381 [2024-07-12 14:32:50.068777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.381 [2024-07-12 14:32:50.068788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.381 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.068992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.069213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.069315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.069471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.069638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.069790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.069902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.069913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.070180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.070335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.070586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.070694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.070805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.070972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.070984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.071235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.071390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.071492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.071681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.071786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.071952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.071964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.072132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.072230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.072395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.072559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.072710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.072870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.072882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.073022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.073167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.073262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.073427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.073547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.073640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.073883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.073894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.074068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.074249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.074357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.074553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.074713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.074897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.074909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.075146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.075158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.382 [2024-07-12 14:32:50.075317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.075328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 
00:27:58.382 [2024-07-12 14:32:50.075529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.382 [2024-07-12 14:32:50.075573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.382 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.075736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.075772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.075935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.075952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.076212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.076228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.076437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.076453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 
00:27:58.383 [2024-07-12 14:32:50.076712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.076728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.076840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.076855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.076952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.076967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.077128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.077247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 
00:27:58.383 [2024-07-12 14:32:50.077477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.077598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.077799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.077973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.077992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.078151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.078166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 
00:27:58.383 [2024-07-12 14:32:50.078342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.078358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.078626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.078641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.078805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.078821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.079070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.079085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 00:27:58.383 [2024-07-12 14:32:50.079237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.383 [2024-07-12 14:32:50.079253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.383 qpair failed and we were unable to recover it. 
00:27:58.383 [2024-07-12 14:32:50.079454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.079470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.079730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.079746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.080898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.080914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.081982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.081998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.082265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.082282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.082440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.082456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.082667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.082683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.082836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.082851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.083135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.083156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.383 qpair failed and we were unable to recover it.
00:27:58.383 [2024-07-12 14:32:50.083316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.383 [2024-07-12 14:32:50.083332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.083497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.083511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.083615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.083628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.083793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.083804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.083905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.083916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.084210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.084222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.084384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.084395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.084546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.084559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.084727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.084739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.084887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.084899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.085153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.085165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.085320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.085331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.085480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.085493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.085690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.085701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.085803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.085814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.086866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.086878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.087115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.087126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.087297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.087309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.087510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.087522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.087677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.087688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.087846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.087858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.088940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.088950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.089979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.384 [2024-07-12 14:32:50.089991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.384 qpair failed and we were unable to recover it.
00:27:58.384 [2024-07-12 14:32:50.090149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.090296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.090575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.090745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.090850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.090944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.090955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.091220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.091232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.091391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.091404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.091655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.091667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.091812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.091824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.091904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.091915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.092886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.092903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.093801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.093812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.094847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.094858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.095012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.095023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.095275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.095287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.095478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.095491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.095659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.095670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.095777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.385 [2024-07-12 14:32:50.095788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.385 qpair failed and we were unable to recover it.
00:27:58.385 [2024-07-12 14:32:50.096017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.385 [2024-07-12 14:32:50.096029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.385 qpair failed and we were unable to recover it. 00:27:58.385 [2024-07-12 14:32:50.096278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.385 [2024-07-12 14:32:50.096289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.096463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.096476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.096582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.096594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.096670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.096681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.096783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.096801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.096894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.096909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.096995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.097218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.097327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.097507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.097609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.097718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.097841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.097857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.098020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.098199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.098462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.098602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.098774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.098900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.098916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.099078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.099325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.099556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.099673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.099792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.099889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.099905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.100108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.100125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.100334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.100351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.100521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.100537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.100724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.100742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.100896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.100912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.101093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.101110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.101346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.101365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.101612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.101629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.101779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.101796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.102029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.102045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.102258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.102275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.102439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.102458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.102708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.102724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.102890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.102907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 
00:27:58.386 [2024-07-12 14:32:50.103151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.103169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.103341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.103358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.386 qpair failed and we were unable to recover it. 00:27:58.386 [2024-07-12 14:32:50.103525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.386 [2024-07-12 14:32:50.103542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.103707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.103724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.103896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.103912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.104164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.104192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.104306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.104322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.104560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.104577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.104738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.104755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.104861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.104877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.105046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.105208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.105460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.105572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.105686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.105901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.105916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.106198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.106214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.106383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.106399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.106503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.106519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.106634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.106650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.106737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.106753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.107001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.107235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.107434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.107551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.107815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.107926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.107942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.108046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.108062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.108294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.108309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.108512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.108527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.108688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.108704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.108882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.108898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.109169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.109185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.109465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.109482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.109590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.109605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.109836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.109852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.110045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.110061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.110290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.110306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.110525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.110541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.110688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.110704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 
00:27:58.387 [2024-07-12 14:32:50.110817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.110832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.111050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.111066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.111342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.387 [2024-07-12 14:32:50.111357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.387 qpair failed and we were unable to recover it. 00:27:58.387 [2024-07-12 14:32:50.111461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.388 [2024-07-12 14:32:50.111477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.388 qpair failed and we were unable to recover it. 00:27:58.388 [2024-07-12 14:32:50.111658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.388 [2024-07-12 14:32:50.111673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.388 qpair failed and we were unable to recover it. 
00:27:58.388 [2024-07-12 14:32:50.111835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.388 [2024-07-12 14:32:50.111854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.388 qpair failed and we were unable to recover it.
[the three messages above repeat for every subsequent connect attempt against addr=10.0.0.2, port=4420 between 14:32:50.111969 and 14:32:50.114770, all with errno = 111 and tqpair=0x7f9244000b90]
00:27:58.388 [2024-07-12 14:32:50.114924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.388 [2024-07-12 14:32:50.114943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.388 qpair failed and we were unable to recover it.
[the same three messages then repeat for every further connect attempt between 14:32:50.115179 and 14:32:50.129898, all with errno = 111 and tqpair=0x7f924c000b90; each ends with "qpair failed and we were unable to recover it."]
00:27:58.391 [2024-07-12 14:32:50.130072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.130697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.130902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.130990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.131486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.131987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.131999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.132153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.132323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.132498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.132667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.132841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.132960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.132972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.133196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.133209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.133433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.133445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.133668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.133679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.133826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.133837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.134017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.134244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.134521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.134639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.134805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.134906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.134918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.135109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.135123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.135195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.135206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.135392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.135405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.135514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.135526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 
00:27:58.391 [2024-07-12 14:32:50.135680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.391 [2024-07-12 14:32:50.135692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.391 qpair failed and we were unable to recover it. 00:27:58.391 [2024-07-12 14:32:50.135900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.135912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.136135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.136146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.136321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.136333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.136472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.136486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 
00:27:58.392 [2024-07-12 14:32:50.136667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.136680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.136890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.136902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.137135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.137320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.137558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 
00:27:58.392 [2024-07-12 14:32:50.137645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.137757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.137971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.137983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.138132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.138144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 00:27:58.392 [2024-07-12 14:32:50.138243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.392 [2024-07-12 14:32:50.138254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.392 qpair failed and we were unable to recover it. 
00:27:58.392 [2024-07-12 14:32:50.138341 - 14:32:50.139516] (connect() failed, errno = 111 / sock connection error messages continue for tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it.")
00:27:58.392 [2024-07-12 14:32:50.138795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:58.392 [2024-07-12 14:32:50.138818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:58.392 [2024-07-12 14:32:50.138825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:58.392 [2024-07-12 14:32:50.138832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:58.392 [2024-07-12 14:32:50.138837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:58.392 [2024-07-12 14:32:50.138945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:58.392 [2024-07-12 14:32:50.139052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:58.392 [2024-07-12 14:32:50.139158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:58.392 [2024-07-12 14:32:50.139159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:58.392 [2024-07-12 14:32:50.139663 - 14:32:50.143059] (connect() failed, errno = 111 / sock connection error messages continue for tqpair=0x7f924c000b90, addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it.")
00:27:58.393 [2024-07-12 14:32:50.143302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.393 [2024-07-12 14:32:50.143323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.393 qpair failed and we were unable to recover it.
00:27:58.393 [2024-07-12 14:32:50.143516 - 14:32:50.145376] (the same failure sequence repeats for the new tqpair=0x7f9254000b90, addr=10.0.0.2, port=4420)
00:27:58.393 [2024-07-12 14:32:50.145592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.145609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.145822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.145838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.145956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.145975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.146210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.146226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.146319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.146333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 
00:27:58.393 [2024-07-12 14:32:50.146549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.146565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.146778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.146794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.147025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.147042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.147259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.147276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.147368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.147391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 
00:27:58.393 [2024-07-12 14:32:50.147646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.147662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.147835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.147848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.148052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.148064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.393 qpair failed and we were unable to recover it. 00:27:58.393 [2024-07-12 14:32:50.148278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.393 [2024-07-12 14:32:50.148291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.148489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.148503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.148595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.148608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.148767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.148779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.148934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.148946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.149201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.149215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.149405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.149418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.149523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.149535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.149740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.149754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.149847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.149860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.150000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.150266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.150435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.150654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.150769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.150940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.150953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.151158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.151200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.151491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.151512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.151676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.151691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.151923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.151939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.152138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.152154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.152308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.152323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.152557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.152573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.152784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.152799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.152913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.152929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.153105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.153216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.153460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.153639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.153754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.153939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.153955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.154045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.154316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.154477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.154586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.154679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 00:27:58.394 [2024-07-12 14:32:50.154904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.394 [2024-07-12 14:32:50.154920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.394 qpair failed and we were unable to recover it. 
00:27:58.394 [2024-07-12 14:32:50.155083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.155188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.155346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.155464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.155571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 
00:27:58.395 [2024-07-12 14:32:50.155730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.155887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.155899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.156048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.156308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.156464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 
00:27:58.395 [2024-07-12 14:32:50.156623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.156821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.156983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.156995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.157240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.157253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.157503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.157515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 
00:27:58.395 [2024-07-12 14:32:50.157649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.157660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.157729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.157740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.157979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.157991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.158213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.158317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 
00:27:58.395 [2024-07-12 14:32:50.158553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.158710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.158880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.158967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.158978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 00:27:58.395 [2024-07-12 14:32:50.159202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.395 [2024-07-12 14:32:50.159215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.395 qpair failed and we were unable to recover it. 
00:27:58.395 [2024-07-12 14:32:50.159419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.395 [2024-07-12 14:32:50.159431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.395 qpair failed and we were unable to recover it.
00:27:58.398 (last three messages repeated 114 more times between 14:32:50.159632 and 14:32:50.180078, all for tqpair=0x7f924c000b90 with errno = 111, addr=10.0.0.2, port=4420)
00:27:58.398 [2024-07-12 14:32:50.180180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.180192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.180348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.180361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.180525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.180538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.180698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.180711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.180911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.180923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 
00:27:58.398 [2024-07-12 14:32:50.181116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.181332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.181486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.181634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.398 [2024-07-12 14:32:50.181871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 
00:27:58.398 [2024-07-12 14:32:50.181977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.398 [2024-07-12 14:32:50.181988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.398 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.182163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.182384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.182487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.182654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.182765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.182922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.182934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.183119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.183201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.183353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.183442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.183704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.183851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.183862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.184074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.184086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.184227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.184239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.184493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.184506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.184761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.184773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.184998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.185095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.185251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.185368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.185534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.185746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.185952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.185964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.186181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.186193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.186343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.186356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.186506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.186518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.186722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.186735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.186879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.186891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.187094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.187109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.187286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.187298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.187532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.187545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.187798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.187811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.187965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.187977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.399 [2024-07-12 14:32:50.188180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.188193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 
00:27:58.399 [2024-07-12 14:32:50.188331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.399 [2024-07-12 14:32:50.188344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.399 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.188598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.188610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.188842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.188854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.189104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.189116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.189326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.189338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.189484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.189495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.189722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.189734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.189995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.190149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.190322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.190479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.190656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.190875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.190985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.190995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.191148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.191160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.191314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.191326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.191469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.191482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.191733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.191745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.191884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.191895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.192033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.192307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.192463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.192563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.192799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.192894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.192904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.193126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.193282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.193431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.193670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.193790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 
00:27:58.400 [2024-07-12 14:32:50.193938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.400 [2024-07-12 14:32:50.193950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.400 qpair failed and we were unable to recover it. 00:27:58.400 [2024-07-12 14:32:50.194190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.194424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.194538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.194704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.194803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.194963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.194975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.195120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.195132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.195276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.195288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.195510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.195522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.195722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.195741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.195845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.195856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.196086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.196098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.196184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.196195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.196349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.196361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.196603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.196615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.196853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.196865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.197116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.197303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.197466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.197578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.197749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.197896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.197908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.198043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.198283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.198439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.198609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.198705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.198956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.198968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.199188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.199202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 
00:27:58.401 [2024-07-12 14:32:50.199415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.199428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.401 [2024-07-12 14:32:50.199610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.401 [2024-07-12 14:32:50.199624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.401 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.199825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.199838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.200289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.200921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.200932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.201087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.201233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.201346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.201457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.201668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.201768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.201913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.201925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.202196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.202209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.202410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.202422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.202601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.202613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.202696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.202707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.202863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.202875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.203081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.203194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.203388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.203626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.203740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.203919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.203932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.204132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.204144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.204309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.204322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 
00:27:58.402 [2024-07-12 14:32:50.204423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.204434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.204602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.204614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.204858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.204870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.402 [2024-07-12 14:32:50.205141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.402 [2024-07-12 14:32:50.205153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.402 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.205407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.205419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 
00:27:58.403 [2024-07-12 14:32:50.205519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.205530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.205681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.205693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.205828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.205840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.206041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.206053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.206201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.206214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 
00:27:58.403 [2024-07-12 14:32:50.206350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.206362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.206507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.206546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.206753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.206777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.207019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.207042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.207227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.207240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 
00:27:58.403 [2024-07-12 14:32:50.207452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.207465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.207674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.207687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.207822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.207835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 
00:27:58.403 [2024-07-12 14:32:50.208446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.208850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.208997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.209011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 
00:27:58.403 [2024-07-12 14:32:50.209236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.403 [2024-07-12 14:32:50.209248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.403 qpair failed and we were unable to recover it. 00:27:58.403 [2024-07-12 14:32:50.209399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.209411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.209500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.209510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.209743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.209755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.209898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.209909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.210140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.210152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.210381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.210393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.210477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.210487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.210638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.210650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.210803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.210815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.211063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.211074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.211295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.211306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.211457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.211470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.211609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.211620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.211840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.211852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.212006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.212284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.212449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.212560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.212770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.212954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.212966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.213102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.213114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.213316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.213327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.213475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.213488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.213690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.213701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.213857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.213869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.214690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.214882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.214892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.215102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.215115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.215269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.215281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 
00:27:58.404 [2024-07-12 14:32:50.215356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.215366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.404 qpair failed and we were unable to recover it. 00:27:58.404 [2024-07-12 14:32:50.215478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.404 [2024-07-12 14:32:50.215489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.215626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.215639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.215784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.215796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.215896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.215910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.216067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.216237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.216332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.216530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.216627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.216740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.216899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.216911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.217542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.217928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.217939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.218148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.218160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.218262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.218274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.218410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.218423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.218621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.218633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.218802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.218814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.219035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.219199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.219369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.219572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.219718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.405 [2024-07-12 14:32:50.219952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.219964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 
00:27:58.405 [2024-07-12 14:32:50.220112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.405 [2024-07-12 14:32:50.220124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.405 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.220353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.220375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.220601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.220618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.220803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.220818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.220972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.220988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.221093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.221110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.221266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.221281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.221428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.221444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.221598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.221613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.221820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.221835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.222041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.222228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.222348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.222583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.222683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.222850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.222970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.222981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.223148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.223160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.223322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.223334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.223500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.223512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.223667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.223679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.223831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.223843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.224074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.224085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.224300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.224312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.224539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.224552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.224708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.224719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.224868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.224879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.225122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.225133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.225347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.225358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.225539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.225552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 
00:27:58.406 [2024-07-12 14:32:50.225754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.406 [2024-07-12 14:32:50.225765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.406 qpair failed and we were unable to recover it. 00:27:58.406 [2024-07-12 14:32:50.225901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.225913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.226064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.226077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.226292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.226304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.226530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.226542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 
00:27:58.407 [2024-07-12 14:32:50.226722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.226734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.226934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.226946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.227091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.227103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.227351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.227363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 00:27:58.407 [2024-07-12 14:32:50.227517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.407 [2024-07-12 14:32:50.227529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.407 qpair failed and we were unable to recover it. 
00:27:58.407 [2024-07-12 14:32:50.227754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.227766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.227936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.227958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.228189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.228205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.228448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.228465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.228608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.228856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.228872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.229941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.229956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.230963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.230978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.231136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.231151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.231361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.231380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.231617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.407 [2024-07-12 14:32:50.231632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.407 qpair failed and we were unable to recover it.
00:27:58.407 [2024-07-12 14:32:50.231891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.231906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.232158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.232327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.232575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.232756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.232879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.232986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.233167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.233282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.233530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.233697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.233921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.233936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.234882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.234897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.235109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.235124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.235286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.235301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.235550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.235572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.235812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.235828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.236088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.236105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.236273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.236289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.236471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.236488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.236720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.236736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.236899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.236915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.237081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.237097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.237250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.237265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.237515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.237532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.237768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.237784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.237955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.237972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.238128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.238144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.238306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.238321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.238489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.238506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.238673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.238688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.238795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.408 [2024-07-12 14:32:50.238812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420
00:27:58.408 qpair failed and we were unable to recover it.
00:27:58.408 [2024-07-12 14:32:50.239069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.239083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.239247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.239260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.239422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.239434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.239603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.239615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.239829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.239841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.240887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.240904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.241089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.241105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.241311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.241326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.241567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.241583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.241741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.241756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.241985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.242144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.242387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.242585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.242756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.242942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.242957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.243173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.243188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.243333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.243346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.243517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.243531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.243716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.243727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.243880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.243891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.244046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.244058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.244261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.244273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.244429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.244441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.244536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.409 [2024-07-12 14:32:50.244547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.409 qpair failed and we were unable to recover it.
00:27:58.409 [2024-07-12 14:32:50.244643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.244654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.244746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.244756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.244841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.244851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.244929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.244940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.245897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.245909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.246939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.246949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.247838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.247851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.248058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.248073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.248231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.248247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.248409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.410 [2024-07-12 14:32:50.248426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420
00:27:58.410 qpair failed and we were unable to recover it.
00:27:58.410 [2024-07-12 14:32:50.248523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.248537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.248627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.248642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.248890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.248903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.248986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.248997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.249136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.249146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 
00:27:58.410 [2024-07-12 14:32:50.249348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.249360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.249446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.249459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.410 [2024-07-12 14:32:50.249539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.410 [2024-07-12 14:32:50.249549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.410 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.249628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.249638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.249718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.249729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.249807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.249818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.250568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.250943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.250955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.251271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.251759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.251912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.251924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.252601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.252816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.252827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 
00:27:58.411 [2024-07-12 14:32:50.253229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.411 qpair failed and we were unable to recover it. 00:27:58.411 [2024-07-12 14:32:50.253855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.411 [2024-07-12 14:32:50.253870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.253946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.253960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.254659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.254927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.254943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.255036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.255051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.255155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.255169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.255389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.255405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.255548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.255563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.255825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.255840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.256344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.256943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.256958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.257056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.257236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.257412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.257530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.257786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.257879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.257966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.257980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.258076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.258092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.258191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.258206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 00:27:58.412 [2024-07-12 14:32:50.258294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.412 [2024-07-12 14:32:50.258310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.412 qpair failed and we were unable to recover it. 
00:27:58.412 [2024-07-12 14:32:50.258453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.258469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.258630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.258646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.258723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.258737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.258898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.258918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.259071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.259190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.259392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.259514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.259682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.259930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.259945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.260036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.260193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.260361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.260580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.260742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.260845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.260857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.261645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.261958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.261970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.262116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.262265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.262444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.262535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.262700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 00:27:58.413 [2024-07-12 14:32:50.262857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.413 [2024-07-12 14:32:50.262868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.413 qpair failed and we were unable to recover it. 
00:27:58.413 [2024-07-12 14:32:50.263036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.263211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.263361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.263454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.263548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 
00:27:58.414 [2024-07-12 14:32:50.263795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.263952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.263963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.264111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.264257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.264349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 
00:27:58.414 [2024-07-12 14:32:50.264454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.264704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.264798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.264808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 
00:27:58.414 [2024-07-12 14:32:50.265402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.265925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.265937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 
00:27:58.414 [2024-07-12 14:32:50.266058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.266070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.266161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.266173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.414 [2024-07-12 14:32:50.266314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.414 [2024-07-12 14:32:50.266326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.414 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.266425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.266437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.266639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.266651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.266727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.266737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.266824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.266837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.266928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.266940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.267350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.267982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.267994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.268138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.268246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.268355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.268550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.268660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.268846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.268958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.268974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.269498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.269981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.269991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.270125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.270714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.270953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.270965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.271055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.271067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.271154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.271165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 
00:27:58.415 [2024-07-12 14:32:50.271301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.415 [2024-07-12 14:32:50.271312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.415 qpair failed and we were unable to recover it. 00:27:58.415 [2024-07-12 14:32:50.271406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.271418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.271662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.271673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.271807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.271819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.271875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.271885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.271950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.271961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.272668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.272913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.272925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.273106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.273345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.273495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.273605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.273767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.273868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.273879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.274205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.274823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.274965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.274976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.275143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.275155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.275359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.275370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.275463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.275475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.275618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.275629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.275806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.275817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.276354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.416 [2024-07-12 14:32:50.276859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.276869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 
00:27:58.416 [2024-07-12 14:32:50.277021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.416 [2024-07-12 14:32:50.277033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.416 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.277189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.277347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.277519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.277615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.277778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.277944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.277956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.278102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.278199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.278271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.278499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.278626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.278861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.278872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.279198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.279842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.279934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.279945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.280081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.280308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.280386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.280558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.280715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.280859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.280871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.281302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.281937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.281949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.282029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 
00:27:58.417 [2024-07-12 14:32:50.282543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.282878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.282889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.417 [2024-07-12 14:32:50.283029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.417 [2024-07-12 14:32:50.283041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.417 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 
00:27:58.418 [2024-07-12 14:32:50.283225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 
00:27:58.418 [2024-07-12 14:32:50.283838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.283936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.283947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 
00:27:58.418 [2024-07-12 14:32:50.284546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.284818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.284994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.285169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 
00:27:58.418 [2024-07-12 14:32:50.285325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.285411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.285488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.285593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 00:27:58.418 [2024-07-12 14:32:50.285680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it. 
00:27:58.418 [2024-07-12 14:32:50.285781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.418 [2024-07-12 14:32:50.285792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.418 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 repeats continuously from 14:32:50.285 through 14:32:50.300, for tqpair=0x7f924c000b90 and later tqpair=0x7f9244000b90; each attempt ends with "qpair failed and we were unable to recover it." — repeated entries omitted ...]
00:27:58.421 [2024-07-12 14:32:50.300965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.300982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.301063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.301173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.301326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.301426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.301667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.301892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.301907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.302454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.302973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.302988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.303083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.303664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.303917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.303928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.304187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.304756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.304915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.304927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.421 [2024-07-12 14:32:50.305349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 00:27:58.421 [2024-07-12 14:32:50.305805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.421 [2024-07-12 14:32:50.305817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.421 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.305899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.305916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.306642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.306851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.306866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.307397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.307971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.307986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.308060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.308727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.308832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.308993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.309342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.309932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.309945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.310088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.310177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.310266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.310419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 00:27:58.422 [2024-07-12 14:32:50.310568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it. 
00:27:58.422 [2024-07-12 14:32:50.310737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.422 [2024-07-12 14:32:50.310749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.422 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 14:32:50.310838 through 14:32:50.324806, alternating between tqpair=0x7f924c000b90 and tqpair=0x7f9244000b90 ...]
00:27:58.424 [2024-07-12 14:32:50.324955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.324967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 00:27:58.424 [2024-07-12 14:32:50.325102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.325114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 00:27:58.424 [2024-07-12 14:32:50.325265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.325277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 00:27:58.424 [2024-07-12 14:32:50.325338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.325349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 00:27:58.424 [2024-07-12 14:32:50.325422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.325434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 
00:27:58.424 [2024-07-12 14:32:50.325507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.424 [2024-07-12 14:32:50.325518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.424 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.325671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.325684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.325817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.325828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.325898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.325909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.325977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.325989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.326067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.326285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.326363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.326447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.326536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.326615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.326775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.326787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.327331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.327925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.327936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.328138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.328792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.328971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.328982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.329255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.329719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.329864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.329876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.330443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.330879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.330890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.331114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.331626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.331975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.331988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 
00:27:58.425 [2024-07-12 14:32:50.332071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.332082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.332152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.332164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.332298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.332310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.425 [2024-07-12 14:32:50.332387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.425 [2024-07-12 14:32:50.332399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.425 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.332490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.332502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 
00:27:58.426 [2024-07-12 14:32:50.332649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.332661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.332734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.332746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.332883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.332894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.332962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.332974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.333043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 
00:27:58.426 [2024-07-12 14:32:50.333124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.333368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.333597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.333754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 00:27:58.426 [2024-07-12 14:32:50.333840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.426 [2024-07-12 14:32:50.333851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.426 qpair failed and we were unable to recover it. 
00:27:58.426 [2024-07-12 14:32:50.334003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.334869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.334881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.335912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.335924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.336953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.336965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.337948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.337959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.338950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.338962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.339916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.339927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.426 qpair failed and we were unable to recover it.
00:27:58.426 [2024-07-12 14:32:50.340644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.426 [2024-07-12 14:32:50.340655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.340793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.340805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.340891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.340902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.341922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.341933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.342920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.342931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.343893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.343905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.344878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.344891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.345917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.345996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.346980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.346992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.347969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.347981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.348115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.348127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.348330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.348342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.348496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.348507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.348592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.427 [2024-07-12 14:32:50.348603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.427 qpair failed and we were unable to recover it.
00:27:58.427 [2024-07-12 14:32:50.348661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.348674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.348742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.348753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.348848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.348860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.348941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.348952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.349003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 
00:27:58.427 [2024-07-12 14:32:50.349089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.349305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.349394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.349615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.427 qpair failed and we were unable to recover it. 00:27:58.427 [2024-07-12 14:32:50.349765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.427 [2024-07-12 14:32:50.349777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.349861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.349872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.349950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.349962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.350102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.350180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.350443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.350593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.350763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.350917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.350929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.351389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.351926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.351938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.351998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.352703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.352925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.352937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.353269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.353829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.353841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.353990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.354614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.354984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.354995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.355156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.355696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.355933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.355945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.356327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.356960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.356972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.357041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.357226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.357373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.357561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.428 [2024-07-12 14:32:50.357709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 
00:27:58.428 [2024-07-12 14:32:50.357872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.428 [2024-07-12 14:32:50.357885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.428 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.357975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.357987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 
00:27:58.429 [2024-07-12 14:32:50.358712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.358912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.358999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.359011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 00:27:58.429 [2024-07-12 14:32:50.359172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.429 [2024-07-12 14:32:50.359184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.429 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.373128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.373774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.373961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.373973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.374319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.374850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.374932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.374943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 A controller has encountered a failure and is being reset. 00:27:58.714 [2024-07-12 14:32:50.375148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.375471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.375948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.375964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.376146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 
00:27:58.714 [2024-07-12 14:32:50.376680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.376970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.376985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.714 [2024-07-12 14:32:50.377216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.714 [2024-07-12 14:32:50.377231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.714 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.377324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.377340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.377416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.377433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.377575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.377590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.377679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.377695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.377841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.377857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.378175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.378727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.378971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.378982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.379226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.379694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.379883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.379895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.380347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.715 [2024-07-12 14:32:50.380797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.380875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.380886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.381028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.381039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.381124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.381135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 00:27:58.715 [2024-07-12 14:32:50.381212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.715 [2024-07-12 14:32:50.381224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.715 qpair failed and we were unable to recover it. 
00:27:58.716 [2024-07-12 14:32:50.381386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.716 [2024-07-12 14:32:50.381401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.716 qpair failed and we were unable to recover it. 00:27:58.716 [2024-07-12 14:32:50.381566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.716 [2024-07-12 14:32:50.381577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.716 qpair failed and we were unable to recover it. 00:27:58.716 [2024-07-12 14:32:50.381722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.716 [2024-07-12 14:32:50.381733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.716 qpair failed and we were unable to recover it. 00:27:58.716 [2024-07-12 14:32:50.381885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.716 [2024-07-12 14:32:50.381897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.716 qpair failed and we were unable to recover it. 00:27:58.716 [2024-07-12 14:32:50.381990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.716 [2024-07-12 14:32:50.382001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.716 qpair failed and we were unable to recover it. 
00:27:58.716 [2024-07-12 14:32:50.382072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.716 [2024-07-12 14:32:50.382085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.716 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeats continuously from 14:32:50.382170 through 14:32:50.397864, always against addr=10.0.0.2, port=4420; through 14:32:50.392367 the failing tqpair is 0x7f924c000b90, and from 14:32:50.392552 onward it is 0x7f9244000b90 ...]
00:27:58.719 [2024-07-12 14:32:50.397964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.397982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 
00:27:58.719 [2024-07-12 14:32:50.398686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.398967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.398983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 
00:27:58.719 [2024-07-12 14:32:50.399364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.719 qpair failed and we were unable to recover it. 00:27:58.719 [2024-07-12 14:32:50.399901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.719 [2024-07-12 14:32:50.399916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.400013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.400214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.400391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.400590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.400685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.400871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.400886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.401646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.401958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.401985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217ed0 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.402092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.402122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9254000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.402396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.402409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.402557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.402569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.402720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.402733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.402968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.402979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.403238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.403891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.403903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.403987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.404157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.404245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.404323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.404552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.404648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.404888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.404899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.405045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.405057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.405157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.405168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 00:27:58.720 [2024-07-12 14:32:50.405252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.720 [2024-07-12 14:32:50.405264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.720 qpair failed and we were unable to recover it. 
00:27:58.720 [2024-07-12 14:32:50.405465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.405477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.405565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.405576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.405712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.405724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.405882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.405894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.405975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.721 [2024-07-12 14:32:50.406066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.721 [2024-07-12 14:32:50.406695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.406902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.406993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.407144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.721 [2024-07-12 14:32:50.407234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.407494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.407606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.407792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.407950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.407966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.721 [2024-07-12 14:32:50.408067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.408082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.408261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.408276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.408439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.408454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.408612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.408628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.408788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.408803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.721 [2024-07-12 14:32:50.409041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.409056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.409215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.409231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.409443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.409459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.409538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.409553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 00:27:58.721 [2024-07-12 14:32:50.409710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.721 [2024-07-12 14:32:50.409725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9244000b90 with addr=10.0.0.2, port=4420 00:27:58.721 qpair failed and we were unable to recover it. 
00:27:58.722 [2024-07-12 14:32:50.414160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.722 [2024-07-12 14:32:50.414174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.722 qpair failed and we were unable to recover it.
00:27:58.724 [2024-07-12 14:32:50.424330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.724 [2024-07-12 14:32:50.424342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.724 qpair failed and we were unable to recover it. 00:27:58.724 [2024-07-12 14:32:50.424479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.724 [2024-07-12 14:32:50.424491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.724 qpair failed and we were unable to recover it. 00:27:58.724 [2024-07-12 14:32:50.424556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.724 [2024-07-12 14:32:50.424567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.724 qpair failed and we were unable to recover it. 00:27:58.724 [2024-07-12 14:32:50.424798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.724 [2024-07-12 14:32:50.424810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.724 qpair failed and we were unable to recover it. 00:27:58.724 [2024-07-12 14:32:50.424871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.724 [2024-07-12 14:32:50.424883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.724 qpair failed and we were unable to recover it. 
00:27:58.724 [2024-07-12 14:32:50.424964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.424976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.425636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.425950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.425961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.426410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.426840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.426929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.426940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.427465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.427955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.427967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.428282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.428845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.428931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.428943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 
00:27:58.725 [2024-07-12 14:32:50.429475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.725 [2024-07-12 14:32:50.429663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.725 qpair failed and we were unable to recover it. 00:27:58.725 [2024-07-12 14:32:50.429752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.429764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.429832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.429843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.429933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.429944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.430571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.430962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.430975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.431212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.431831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.431935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.431946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.432425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.432906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.432917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.432996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.433088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.433235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.433327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 00:27:58.726 [2024-07-12 14:32:50.433423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it. 
00:27:58.726 [2024-07-12 14:32:50.433579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.726 [2024-07-12 14:32:50.433591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.726 qpair failed and we were unable to recover it.
[... the same three-message error sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously for this tqpair from 14:32:50.433579 through 14:32:50.448956; intermediate repetitions elided ...]
00:27:58.729 [2024-07-12 14:32:50.448945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.729 [2024-07-12 14:32:50.448956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.729 qpair failed and we were unable to recover it.
00:27:58.729 [2024-07-12 14:32:50.449101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.729 [2024-07-12 14:32:50.449114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.729 qpair failed and we were unable to recover it. 00:27:58.729 [2024-07-12 14:32:50.449266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.729 [2024-07-12 14:32:50.449278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.729 qpair failed and we were unable to recover it. 00:27:58.729 [2024-07-12 14:32:50.449529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.729 [2024-07-12 14:32:50.449542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.729 qpair failed and we were unable to recover it. 00:27:58.729 [2024-07-12 14:32:50.449707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.449720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.449790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.449802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.449950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.449963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.450734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.450982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.450994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.451138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.451238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.451330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.451565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.451664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.451817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.451828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.452156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.452816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.452901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.452912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.453392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.453909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.453921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.454056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 
00:27:58.730 [2024-07-12 14:32:50.454672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.454858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.454870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.730 qpair failed and we were unable to recover it. 00:27:58.730 [2024-07-12 14:32:50.455007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.730 [2024-07-12 14:32:50.455019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 
00:27:58.731 [2024-07-12 14:32:50.455206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 
00:27:58.731 [2024-07-12 14:32:50.455831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.455909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.455920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 
00:27:58.731 [2024-07-12 14:32:50.456320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 
00:27:58.731 [2024-07-12 14:32:50.456845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.456926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.456937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.457007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.457018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.457152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.457164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 00:27:58.731 [2024-07-12 14:32:50.457366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.731 [2024-07-12 14:32:50.457387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.731 qpair failed and we were unable to recover it. 
00:27:58.731 [2024-07-12 14:32:50.457553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.457565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.457645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.457657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.457729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.457741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.457974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.457987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.458118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 
00:27:58.732 [2024-07-12 14:32:50.458289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.458368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.458540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.458648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 00:27:58.732 [2024-07-12 14:32:50.458731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.732 [2024-07-12 14:32:50.458743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.732 qpair failed and we were unable to recover it. 
00:27:58.732 [2024-07-12 14:32:50.458892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.458905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.459915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.459926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.460977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.732 [2024-07-12 14:32:50.460989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.732 qpair failed and we were unable to recover it.
00:27:58.732 [2024-07-12 14:32:50.461136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.461874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.461885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.462953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.462965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.463943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.463954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.464861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.464872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.465020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.465032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.465130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.733 [2024-07-12 14:32:50.465142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.733 qpair failed and we were unable to recover it.
00:27:58.733 [2024-07-12 14:32:50.465314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.465326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.465496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.465508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.465578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.465589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.465744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.465755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.465828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.465839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.466967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.466979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.467962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.467973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.468803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.468814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.469976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.469988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.470124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.470136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.470357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.470368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.470449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.470462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.470619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.734 [2024-07-12 14:32:50.470630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.734 qpair failed and we were unable to recover it.
00:27:58.734 [2024-07-12 14:32:50.470771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.470783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.470870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.470880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.470972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.470984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.471957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.471968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.472982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.472994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.473960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.473972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.474064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.474153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.474304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.474517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.735 [2024-07-12 14:32:50.474671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420
00:27:58.735 qpair failed and we were unable to recover it.
00:27:58.735 [2024-07-12 14:32:50.474876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.735 [2024-07-12 14:32:50.474888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.735 qpair failed and we were unable to recover it. 00:27:58.735 [2024-07-12 14:32:50.474972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.735 [2024-07-12 14:32:50.474984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.735 qpair failed and we were unable to recover it. 00:27:58.735 [2024-07-12 14:32:50.475085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.735 [2024-07-12 14:32:50.475096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.735 qpair failed and we were unable to recover it. 00:27:58.735 [2024-07-12 14:32:50.475243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.735 [2024-07-12 14:32:50.475255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f924c000b90 with addr=10.0.0.2, port=4420 00:27:58.735 qpair failed and we were unable to recover it. 
00:27:58.735 [2024-07-12 14:32:50.475367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.735 [2024-07-12 14:32:50.475399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1226000 with addr=10.0.0.2, port=4420 00:27:58.735 [2024-07-12 14:32:50.475411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1226000 is same with the state(5) to be set 00:27:58.735 [2024-07-12 14:32:50.475429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1226000 (9): Bad file descriptor 00:27:58.735 [2024-07-12 14:32:50.475444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.735 [2024-07-12 14:32:50.475454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.736 [2024-07-12 14:32:50.475466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.736 Unable to reset the controller. 
00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 Malloc0 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 [2024-07-12 14:32:50.863586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.014 14:32:50 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 [2024-07-12 14:32:50.888581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.014 14:32:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2704387 00:27:59.582 Controller properly reset. 00:28:04.852 Initializing NVMe Controllers 00:28:04.852 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:04.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:04.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:04.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:04.852 Initialization complete. Launching workers. 
00:28:04.852 Starting thread on core 1 00:28:04.852 Starting thread on core 2 00:28:04.852 Starting thread on core 3 00:28:04.852 Starting thread on core 0 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:04.852 00:28:04.852 real 0m11.255s 00:28:04.852 user 0m36.657s 00:28:04.852 sys 0m5.518s 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.852 ************************************ 00:28:04.852 END TEST nvmf_target_disconnect_tc2 00:28:04.852 ************************************ 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.852 rmmod nvme_tcp 00:28:04.852 rmmod nvme_fabrics 00:28:04.852 rmmod nvme_keyring 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.852 
14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2704976 ']' 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2704976 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2704976 ']' 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2704976 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.852 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704976 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704976' 00:28:04.853 killing process with pid 2704976 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2704976 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2704976 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.853 14:32:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.853 14:32:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.757 14:32:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.757 00:28:06.757 real 0m19.173s 00:28:06.757 user 1m3.382s 00:28:06.757 sys 0m10.004s 00:28:06.757 14:32:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.757 14:32:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 ************************************ 00:28:06.757 END TEST nvmf_target_disconnect 00:28:06.757 ************************************ 00:28:06.757 14:32:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:06.757 14:32:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:28:06.757 14:32:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.757 14:32:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 14:32:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:28:06.757 00:28:06.757 real 20m51.391s 00:28:06.757 user 45m23.998s 00:28:06.757 sys 6m22.201s 00:28:06.757 14:32:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.757 14:32:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 ************************************ 00:28:06.757 END TEST nvmf_tcp 00:28:06.757 ************************************ 00:28:06.757 14:32:58 -- common/autotest_common.sh@1142 -- # return 0 00:28:06.757 14:32:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:28:06.757 14:32:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:06.757 14:32:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:06.757 14:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.757 14:32:58 -- common/autotest_common.sh@10 -- # set +x 00:28:07.016 ************************************ 00:28:07.016 START TEST spdkcli_nvmf_tcp 00:28:07.016 ************************************ 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:07.016 * Looking for test storage... 00:28:07.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.016 14:32:58 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2706612 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2706612 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2706612 ']' 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.016 14:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.016 [2024-07-12 14:32:58.941986] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:28:07.016 [2024-07-12 14:32:58.942035] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706612 ] 00:28:07.016 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.016 [2024-07-12 14:32:58.996094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:07.275 [2024-07-12 14:32:59.077593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.275 [2024-07-12 14:32:59.077597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.841 14:32:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:07.841 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:07.841 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:07.841 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:07.841 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:07.842 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:07.842 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:07.842 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:07.842 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:07.842 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:07.842 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:07.842 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:07.842 ' 00:28:10.374 [2024-07-12 14:33:02.165969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.749 [2024-07-12 14:33:03.341907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:13.653 [2024-07-12 14:33:05.504504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:15.550 [2024-07-12 14:33:07.362266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:16.926 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:16.926 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:16.927 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:16.927 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:16.927 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:16.927 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:16.927 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:16.927 14:33:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:16.927 14:33:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.927 14:33:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.185 14:33:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:17.185 14:33:08 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.185 14:33:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.185 14:33:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:17.185 14:33:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.443 14:33:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:17.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:17.444 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:17.444 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:17.444 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:17.444 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:17.444 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:17.444 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:17.444 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:17.444 ' 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:22.712 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:22.712 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:22.712 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:22.712 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:22.712 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:22.712 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:22.712 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:22.712 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:22.712 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2706612 ']' 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2706612' 00:28:22.712 killing process with pid 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2706612 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:22.712 14:33:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2706612 ']' 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 2706612 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2706612 ']' 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2706612 00:28:22.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2706612) - No such process 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2706612 is not found' 00:28:22.713 Process with pid 2706612 is not found 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:22.713 00:28:22.713 real 0m15.811s 00:28:22.713 user 0m32.756s 00:28:22.713 sys 0m0.702s 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.713 14:33:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.713 ************************************ 00:28:22.713 END TEST spdkcli_nvmf_tcp 00:28:22.713 ************************************ 00:28:22.713 14:33:14 -- common/autotest_common.sh@1142 -- # return 0 00:28:22.713 14:33:14 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:22.713 14:33:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:22.713 14:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.713 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:28:22.713 ************************************ 00:28:22.713 START TEST nvmf_identify_passthru 00:28:22.713 
************************************ 00:28:22.713 14:33:14 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:22.972 * Looking for test storage... 00:28:22.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:22.972 14:33:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.972 14:33:14 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.972 14:33:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.972 14:33:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.972 14:33:14 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:28:22.972 14:33:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.972 14:33:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.972 14:33:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:22.972 14:33:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:22.972 14:33:14 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.972 14:33:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.276 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:28.277 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:28.277 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:28.277 14:33:19 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:28.277 Found net devices under 0000:86:00.0: cvl_0_0 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.277 14:33:19 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:28.277 Found net devices under 0000:86:00.1: cvl_0_1 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.277 14:33:19 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.277 14:33:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:28.277 00:28:28.277 --- 10.0.0.2 ping statistics --- 00:28:28.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.277 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:28.277 00:28:28.277 --- 10.0.0.1 ping statistics --- 00:28:28.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.277 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.277 14:33:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:28.277 14:33:20 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:28.277 14:33:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:28.277 14:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:28.277 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.470 14:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:32.470 14:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:32.470 14:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:28:32.470 14:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:28:32.470 EAL: No free 2048 kB hugepages reported on node 1
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2714147
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:36.663 14:33:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2714147
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2714147 ']'
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:36.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:36.663 14:33:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:36.663 [2024-07-12 14:33:28.475304] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:28:36.663 [2024-07-12 14:33:28.475348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:36.663 EAL: No free 2048 kB hugepages reported on node 1
00:28:36.663 [2024-07-12 14:33:28.531092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:36.663 [2024-07-12 14:33:28.611531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:36.663 [2024-07-12 14:33:28.611566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:36.663 [2024-07-12 14:33:28.611576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:36.663 [2024-07-12 14:33:28.611582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:36.663 [2024-07-12 14:33:28.611586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:36.663 [2024-07-12 14:33:28.611636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.663 [2024-07-12 14:33:28.611653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:36.663 [2024-07-12 14:33:28.611741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:28:36.663 [2024-07-12 14:33:28.611743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0
00:28:37.599 14:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:37.599 INFO: Log level set to 20
00:28:37.599 INFO: Requests:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "method": "nvmf_set_config",
00:28:37.599 "id": 1,
00:28:37.599 "params": {
00:28:37.599 "admin_cmd_passthru": {
00:28:37.599 "identify_ctrlr": true
00:28:37.599 }
00:28:37.599 }
00:28:37.599 }
00:28:37.599 
00:28:37.599 INFO: response:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "id": 1,
00:28:37.599 "result": true
00:28:37.599 }
00:28:37.599 
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:37.599 14:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:37.599 INFO: Setting log level to 20
00:28:37.599 INFO: Setting log level to 20
00:28:37.599 INFO: Log level set to 20
00:28:37.599 INFO: Log level set to 20
00:28:37.599 INFO: Requests:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "method": "framework_start_init",
00:28:37.599 "id": 1
00:28:37.599 }
00:28:37.599 
00:28:37.599 INFO: Requests:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "method": "framework_start_init",
00:28:37.599 "id": 1
00:28:37.599 }
00:28:37.599 
00:28:37.599 [2024-07-12 14:33:29.377245] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:28:37.599 INFO: response:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "id": 1,
00:28:37.599 "result": true
00:28:37.599 }
00:28:37.599 
00:28:37.599 INFO: response:
00:28:37.599 {
00:28:37.599 "jsonrpc": "2.0",
00:28:37.599 "id": 1,
00:28:37.599 "result": true
00:28:37.599 }
00:28:37.599 
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:37.599 14:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:37.599 INFO: Setting log level to 40
00:28:37.599 INFO: Setting log level to 40
00:28:37.599 INFO: Setting log level to 40
00:28:37.599 [2024-07-12 14:33:29.390685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:37.599 14:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:37.599 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:37.600 14:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:28:37.600 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:37.600 14:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.888 Nvme0n1
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.888 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.888 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.888 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.888 [2024-07-12 14:33:32.280430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.888 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.888 [
00:28:40.888 {
00:28:40.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:40.888 "subtype": "Discovery",
00:28:40.888 "listen_addresses": [],
00:28:40.888 "allow_any_host": true,
00:28:40.888 "hosts": []
00:28:40.888 },
00:28:40.888 {
00:28:40.888 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:40.888 "subtype": "NVMe",
00:28:40.888 "listen_addresses": [
00:28:40.888 {
00:28:40.888 "trtype": "TCP",
00:28:40.888 "adrfam": "IPv4",
00:28:40.888 "traddr": "10.0.0.2",
00:28:40.888 "trsvcid": "4420"
00:28:40.888 }
00:28:40.888 ],
00:28:40.888 "allow_any_host": true,
00:28:40.888 "hosts": [],
00:28:40.888 "serial_number": "SPDK00000000000001",
00:28:40.888 "model_number": "SPDK bdev Controller",
00:28:40.888 "max_namespaces": 1,
00:28:40.888 "min_cntlid": 1,
00:28:40.888 "max_cntlid": 65519,
00:28:40.888 "namespaces": [
00:28:40.888 {
00:28:40.888 "nsid": 1,
00:28:40.888 "bdev_name": "Nvme0n1",
00:28:40.888 "name": "Nvme0n1",
00:28:40.888 "nguid": "4B8DFE00A432413185531C0F16853CEF",
00:28:40.888 "uuid": "4b8dfe00-a432-4131-8553-1c0f16853cef"
00:28:40.888 }
00:28:40.888 ]
00:28:40.888 }
00:28:40.888 ]
00:28:40.888 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.888 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:28:40.889 EAL: No free 2048 kB hugepages reported on node 1
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:28:40.889 EAL: No free 2048 kB hugepages reported on node 1
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:28:40.889 14:33:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:40.889 rmmod nvme_tcp
00:28:40.889 rmmod nvme_fabrics
00:28:40.889 rmmod nvme_keyring
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2714147 ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2714147
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2714147 ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2714147
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2714147
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2714147'
killing process with pid 2714147
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2714147
00:28:40.889 14:33:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2714147
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:42.794 14:33:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:42.794 14:33:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:28:42.794 14:33:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:44.699 14:33:36 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:44.699 
00:28:44.699 real 0m21.708s
00:28:44.699 user 0m30.110s
00:28:44.699 sys 0m4.756s
00:28:44.699 14:33:36 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:44.699 14:33:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:44.699 ************************************
00:28:44.699 END TEST nvmf_identify_passthru
00:28:44.699 ************************************
00:28:44.699 14:33:36 -- common/autotest_common.sh@1142 -- # return 0
00:28:44.699 14:33:36 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:28:44.699 14:33:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:44.699 14:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:44.699 14:33:36 -- common/autotest_common.sh@10 -- # set +x
00:28:44.699 ************************************
00:28:44.699 START TEST nvmf_dif
00:28:44.699 ************************************
00:28:44.699 14:33:36 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:28:44.699 * Looking for test storage...
00:28:44.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:44.700 14:33:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:44.700 14:33:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:44.700 14:33:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:44.700 14:33:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.700 14:33:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.700 14:33:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.700 14:33:36 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:28:44.700 14:33:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:28:44.700 14:33:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:44.700 14:33:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:28:44.700 14:33:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:28:44.700 14:33:36 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable
00:28:44.700 14:33:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@298 -- # mlx=()
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:28:49.974 Found 0000:86:00.0 (0x8086 - 0x159b)
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:28:49.974 Found 0000:86:00.1 (0x8086 - 0x159b)
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:28:49.974 Found net devices under 0000:86:00.0: cvl_0_0
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:49.974 14:33:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:28:49.975 Found net devices under 0000:86:00.1: cvl_0_1
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:49.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:49.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:28:49.975 
00:28:49.975 --- 10.0.0.2 ping statistics ---
00:28:49.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:49.975 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:49.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:49.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:28:49.975 
00:28:49.975 --- 10.0.0.1 ping statistics ---
00:28:49.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:49.975 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@422 -- # return 0
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:28:49.975 14:33:41 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:28:52.509 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:28:52.509 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:28:52.509 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:52.509 14:33:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:28:52.509 14:33:44 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2719606
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2719606
00:28:52.509 14:33:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2719606 ']'
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:52.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:52.509 14:33:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:28:52.509 [2024-07-12 14:33:44.232162] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization...
00:28:52.509 [2024-07-12 14:33:44.232198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.509 [2024-07-12 14:33:44.290334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.509 [2024-07-12 14:33:44.368018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.509 [2024-07-12 14:33:44.368052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.509 [2024-07-12 14:33:44.368059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.509 [2024-07-12 14:33:44.368065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.509 [2024-07-12 14:33:44.368073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.509 [2024-07-12 14:33:44.368091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:53.075 14:33:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:53.075 14:33:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.075 14:33:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:53.075 14:33:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:53.075 [2024-07-12 14:33:45.065797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.075 14:33:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.075 14:33:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 ************************************ 00:28:53.335 START TEST fio_dif_1_default 00:28:53.335 ************************************ 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 bdev_null0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 [2024-07-12 14:33:45.138080] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.335 { 00:28:53.335 "params": { 00:28:53.335 "name": "Nvme$subsystem", 00:28:53.335 "trtype": "$TEST_TRANSPORT", 00:28:53.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.335 "adrfam": "ipv4", 00:28:53.335 "trsvcid": "$NVMF_PORT", 00:28:53.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.335 "hdgst": ${hdgst:-false}, 
00:28:53.335 "ddgst": ${ddgst:-false} 00:28:53.335 }, 00:28:53.335 "method": "bdev_nvme_attach_controller" 00:28:53.335 } 00:28:53.335 EOF 00:28:53.335 )") 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:53.335 "params": { 00:28:53.335 "name": "Nvme0", 00:28:53.335 "trtype": "tcp", 00:28:53.335 "traddr": "10.0.0.2", 00:28:53.335 "adrfam": "ipv4", 00:28:53.335 "trsvcid": "4420", 00:28:53.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:53.335 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:53.335 "hdgst": false, 00:28:53.335 "ddgst": false 00:28:53.335 }, 00:28:53.335 "method": "bdev_nvme_attach_controller" 00:28:53.335 }' 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:53.335 14:33:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.594 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:53.594 fio-3.35 
00:28:53.594 Starting 1 thread 00:28:53.594 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.813 00:29:05.813 filename0: (groupid=0, jobs=1): err= 0: pid=2719984: Fri Jul 12 14:33:56 2024 00:29:05.813 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:29:05.813 slat (nsec): min=5599, max=25306, avg=6276.37, stdev=1334.78 00:29:05.813 clat (usec): min=400, max=47509, avg=21040.69, stdev=20501.84 00:29:05.813 lat (usec): min=406, max=47535, avg=21046.97, stdev=20501.72 00:29:05.813 clat percentiles (usec): 00:29:05.813 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 474], 20.00th=[ 486], 00:29:05.813 | 30.00th=[ 494], 40.00th=[ 510], 50.00th=[40633], 60.00th=[41157], 00:29:05.813 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:05.813 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:29:05.813 | 99.99th=[47449] 00:29:05.813 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:29:05.813 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:29:05.813 lat (usec) : 500=36.37%, 750=13.53% 00:29:05.813 lat (msec) : 50=50.11% 00:29:05.813 cpu : usr=94.35%, sys=5.41%, ctx=16, majf=0, minf=214 00:29:05.813 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:05.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.813 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.813 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:05.813 00:29:05.813 Run status group 0 (all jobs): 00:29:05.813 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10003-10003msec 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:05.813 14:33:56 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 00:29:05.813 real 0m11.109s 00:29:05.813 user 0m15.798s 00:29:05.813 sys 0m0.794s 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 ************************************ 00:29:05.813 END TEST fio_dif_1_default 00:29:05.813 ************************************ 00:29:05.813 14:33:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:05.813 14:33:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:05.813 14:33:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:05.813 14:33:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 
************************************ 00:29:05.813 START TEST fio_dif_1_multi_subsystems 00:29:05.813 ************************************ 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 bdev_null0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 [2024-07-12 14:33:56.317996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 bdev_null1 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:05.813 14:33:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.813 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.814 { 00:29:05.814 "params": { 00:29:05.814 "name": "Nvme$subsystem", 00:29:05.814 "trtype": "$TEST_TRANSPORT", 00:29:05.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.814 "adrfam": "ipv4", 00:29:05.814 "trsvcid": "$NVMF_PORT", 00:29:05.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.814 "hdgst": ${hdgst:-false}, 00:29:05.814 "ddgst": ${ddgst:-false} 00:29:05.814 }, 00:29:05.814 "method": "bdev_nvme_attach_controller" 00:29:05.814 } 00:29:05.814 EOF 00:29:05.814 )") 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:05.814 
14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.814 { 00:29:05.814 "params": { 00:29:05.814 "name": "Nvme$subsystem", 00:29:05.814 "trtype": "$TEST_TRANSPORT", 00:29:05.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.814 "adrfam": "ipv4", 00:29:05.814 "trsvcid": "$NVMF_PORT", 00:29:05.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.814 "hdgst": ${hdgst:-false}, 00:29:05.814 "ddgst": ${ddgst:-false} 00:29:05.814 }, 00:29:05.814 "method": "bdev_nvme_attach_controller" 00:29:05.814 } 00:29:05.814 EOF 00:29:05.814 )") 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.814 
14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:05.814 "params": { 00:29:05.814 "name": "Nvme0", 00:29:05.814 "trtype": "tcp", 00:29:05.814 "traddr": "10.0.0.2", 00:29:05.814 "adrfam": "ipv4", 00:29:05.814 "trsvcid": "4420", 00:29:05.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:05.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:05.814 "hdgst": false, 00:29:05.814 "ddgst": false 00:29:05.814 }, 00:29:05.814 "method": "bdev_nvme_attach_controller" 00:29:05.814 },{ 00:29:05.814 "params": { 00:29:05.814 "name": "Nvme1", 00:29:05.814 "trtype": "tcp", 00:29:05.814 "traddr": "10.0.0.2", 00:29:05.814 "adrfam": "ipv4", 00:29:05.814 "trsvcid": "4420", 00:29:05.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.814 "hdgst": false, 00:29:05.814 "ddgst": false 00:29:05.814 }, 00:29:05.814 "method": "bdev_nvme_attach_controller" 00:29:05.814 }' 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:05.814 14:33:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.814 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:05.814 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:05.814 fio-3.35 00:29:05.814 Starting 2 threads 00:29:05.814 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.874 00:29:15.874 filename0: (groupid=0, jobs=1): err= 0: pid=2721951: Fri Jul 12 14:34:07 2024 00:29:15.874 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:29:15.874 slat (nsec): min=2966, max=24144, avg=7944.30, stdev=2826.74 00:29:15.874 clat (usec): min=40814, max=46306, avg=41010.85, stdev=359.76 00:29:15.874 lat (usec): min=40820, max=46316, avg=41018.80, stdev=359.74 00:29:15.874 clat percentiles (usec): 00:29:15.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:15.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:15.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:15.874 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:29:15.874 | 99.99th=[46400] 00:29:15.874 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:29:15.874 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:15.874 lat (msec) : 50=100.00% 00:29:15.874 cpu : usr=97.56%, sys=2.19%, ctx=13, majf=0, minf=75 00:29:15.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:29:15.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.874 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:15.874 filename1: (groupid=0, jobs=1): err= 0: pid=2721952: Fri Jul 12 14:34:07 2024 00:29:15.874 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:29:15.874 slat (nsec): min=4271, max=34478, avg=8021.69, stdev=2935.96 00:29:15.874 clat (usec): min=40803, max=45385, avg=40998.63, stdev=291.71 00:29:15.874 lat (usec): min=40810, max=45398, avg=41006.65, stdev=291.72 00:29:15.874 clat percentiles (usec): 00:29:15.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:15.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:15.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:15.874 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:29:15.874 | 99.99th=[45351] 00:29:15.874 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:29:15.874 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:15.874 lat (msec) : 50=100.00% 00:29:15.874 cpu : usr=97.51%, sys=2.24%, ctx=14, majf=0, minf=177 00:29:15.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.874 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:15.874 00:29:15.874 Run status group 0 (all jobs): 00:29:15.874 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10013msec 00:29:15.874 14:34:07 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 00:29:15.874 real 0m11.311s 00:29:15.874 user 0m25.764s 00:29:15.874 sys 0m0.739s 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 ************************************ 00:29:15.874 END TEST fio_dif_1_multi_subsystems 00:29:15.874 ************************************ 00:29:15.874 14:34:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:15.874 14:34:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:15.874 14:34:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:15.874 14:34:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 ************************************ 00:29:15.874 START TEST fio_dif_rand_params 00:29:15.874 ************************************ 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # NULL_DIF=3 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 bdev_null0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:15.874 [2024-07-12 14:34:07.696003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.874 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:29:15.875 { 00:29:15.875 "params": { 00:29:15.875 "name": "Nvme$subsystem", 00:29:15.875 "trtype": "$TEST_TRANSPORT", 00:29:15.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.875 "adrfam": "ipv4", 00:29:15.875 "trsvcid": "$NVMF_PORT", 00:29:15.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.875 "hdgst": ${hdgst:-false}, 00:29:15.875 "ddgst": ${ddgst:-false} 00:29:15.875 }, 00:29:15.875 "method": "bdev_nvme_attach_controller" 00:29:15.875 } 00:29:15.875 EOF 00:29:15.875 )") 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file <= files )) 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:15.875 "params": { 00:29:15.875 "name": "Nvme0", 00:29:15.875 "trtype": "tcp", 00:29:15.875 "traddr": "10.0.0.2", 00:29:15.875 "adrfam": "ipv4", 00:29:15.875 "trsvcid": "4420", 00:29:15.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.875 "hdgst": false, 00:29:15.875 "ddgst": false 00:29:15.875 }, 00:29:15.875 "method": "bdev_nvme_attach_controller" 00:29:15.875 }' 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:15.875 14:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:16.134 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:16.134 ... 00:29:16.134 fio-3.35 00:29:16.134 Starting 3 threads 00:29:16.134 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.699 00:29:22.699 filename0: (groupid=0, jobs=1): err= 0: pid=2723915: Fri Jul 12 14:34:13 2024 00:29:22.699 read: IOPS=285, BW=35.7MiB/s (37.5MB/s)(180MiB/5045msec) 00:29:22.699 slat (nsec): min=3313, max=22405, avg=10432.01, stdev=2444.45 00:29:22.699 clat (usec): min=3278, max=90330, avg=10452.80, stdev=11122.84 00:29:22.699 lat (usec): min=3284, max=90342, avg=10463.23, stdev=11122.79 00:29:22.699 clat percentiles (usec): 00:29:22.699 | 1.00th=[ 3818], 5.00th=[ 4555], 10.00th=[ 5669], 20.00th=[ 6194], 00:29:22.699 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8455], 00:29:22.699 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[47973], 00:29:22.699 | 99.00th=[50070], 99.50th=[50594], 99.90th=[90702], 99.95th=[90702], 00:29:22.699 | 99.99th=[90702] 00:29:22.699 bw ( KiB/s): min=26624, max=46848, per=32.64%, avg=36864.00, stdev=6671.30, samples=10 00:29:22.699 iops : min= 208, max= 366, avg=288.00, stdev=52.12, samples=10 00:29:22.699 lat (msec) : 4=2.57%, 10=89.11%, 20=1.39%, 50=5.96%, 100=0.97% 00:29:22.699 cpu : usr=94.23%, sys=5.37%, ctx=16, majf=0, minf=70 00:29:22.699 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.699 issued rwts: total=1442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.699 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:29:22.699 filename0: (groupid=0, jobs=1): err= 0: pid=2723916: Fri Jul 12 14:34:13 2024 00:29:22.699 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(173MiB/5004msec) 00:29:22.699 slat (nsec): min=6236, max=28110, avg=10549.71, stdev=2481.62 00:29:22.699 clat (usec): min=3514, max=90268, avg=10832.01, stdev=11031.05 00:29:22.699 lat (usec): min=3521, max=90279, avg=10842.56, stdev=11031.06 00:29:22.699 clat percentiles (usec): 00:29:22.699 | 1.00th=[ 3982], 5.00th=[ 4490], 10.00th=[ 5997], 20.00th=[ 6521], 00:29:22.700 | 30.00th=[ 6915], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 8848], 00:29:22.700 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[47449], 00:29:22.700 | 99.00th=[51119], 99.50th=[51119], 99.90th=[88605], 99.95th=[90702], 00:29:22.700 | 99.99th=[90702] 00:29:22.700 bw ( KiB/s): min=15360, max=49152, per=31.36%, avg=35411.10, stdev=9936.52, samples=10 00:29:22.700 iops : min= 120, max= 384, avg=276.60, stdev=77.65, samples=10 00:29:22.700 lat (msec) : 4=1.01%, 10=83.38%, 20=8.74%, 50=4.70%, 100=2.17% 00:29:22.700 cpu : usr=94.64%, sys=5.02%, ctx=14, majf=0, minf=82 00:29:22.700 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.700 issued rwts: total=1384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:22.700 filename0: (groupid=0, jobs=1): err= 0: pid=2723917: Fri Jul 12 14:34:13 2024 00:29:22.700 read: IOPS=324, BW=40.6MiB/s (42.6MB/s)(203MiB/5002msec) 00:29:22.700 slat (nsec): min=6241, max=87919, avg=10350.19, stdev=3173.45 00:29:22.700 clat (usec): min=3494, max=91345, avg=9221.86, stdev=7360.20 00:29:22.700 lat (usec): min=3504, max=91357, avg=9232.21, stdev=7360.55 00:29:22.700 clat percentiles (usec): 00:29:22.700 | 1.00th=[ 3884], 5.00th=[ 
4113], 10.00th=[ 5211], 20.00th=[ 6063], 00:29:22.700 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 8717], 00:29:22.700 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[11994], 95.00th=[12911], 00:29:22.700 | 99.00th=[49021], 99.50th=[50594], 99.90th=[50594], 99.95th=[91751], 00:29:22.700 | 99.99th=[91751] 00:29:22.700 bw ( KiB/s): min=32768, max=50944, per=35.49%, avg=40078.22, stdev=6093.27, samples=9 00:29:22.700 iops : min= 256, max= 398, avg=313.11, stdev=47.60, samples=9 00:29:22.700 lat (msec) : 4=2.83%, 10=68.86%, 20=25.42%, 50=2.34%, 100=0.55% 00:29:22.700 cpu : usr=94.94%, sys=4.78%, ctx=11, majf=0, minf=164 00:29:22.700 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.700 issued rwts: total=1625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:22.700 00:29:22.700 Run status group 0 (all jobs): 00:29:22.700 READ: bw=110MiB/s (116MB/s), 34.6MiB/s-40.6MiB/s (36.3MB/s-42.6MB/s), io=556MiB (583MB), run=5002-5045msec 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 bdev_null0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 [2024-07-12 14:34:13.835293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:22.700 14:34:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 bdev_null1 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:22.700 14:34:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 bdev_null2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:22.700 { 00:29:22.700 "params": { 00:29:22.700 "name": "Nvme$subsystem", 00:29:22.700 "trtype": "$TEST_TRANSPORT", 00:29:22.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.700 "adrfam": "ipv4", 00:29:22.700 "trsvcid": "$NVMF_PORT", 00:29:22.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.700 "hdgst": ${hdgst:-false}, 00:29:22.700 "ddgst": ${ddgst:-false} 00:29:22.700 }, 00:29:22.700 "method": "bdev_nvme_attach_controller" 00:29:22.700 } 00:29:22.700 EOF 00:29:22.700 )") 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:22.700 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:22.701 { 00:29:22.701 "params": { 00:29:22.701 "name": "Nvme$subsystem", 00:29:22.701 "trtype": "$TEST_TRANSPORT", 00:29:22.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.701 "adrfam": "ipv4", 00:29:22.701 "trsvcid": "$NVMF_PORT", 00:29:22.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.701 "hdgst": ${hdgst:-false}, 00:29:22.701 "ddgst": ${ddgst:-false} 00:29:22.701 }, 00:29:22.701 "method": "bdev_nvme_attach_controller" 00:29:22.701 } 00:29:22.701 EOF 00:29:22.701 )") 00:29:22.701 
14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:22.701 { 00:29:22.701 "params": { 00:29:22.701 "name": "Nvme$subsystem", 00:29:22.701 "trtype": "$TEST_TRANSPORT", 00:29:22.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.701 "adrfam": "ipv4", 00:29:22.701 "trsvcid": "$NVMF_PORT", 00:29:22.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.701 "hdgst": ${hdgst:-false}, 00:29:22.701 "ddgst": ${ddgst:-false} 00:29:22.701 }, 00:29:22.701 "method": "bdev_nvme_attach_controller" 00:29:22.701 } 00:29:22.701 EOF 00:29:22.701 )") 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:22.701 "params": { 00:29:22.701 "name": "Nvme0", 00:29:22.701 "trtype": "tcp", 00:29:22.701 "traddr": "10.0.0.2", 00:29:22.701 "adrfam": "ipv4", 00:29:22.701 "trsvcid": "4420", 00:29:22.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.701 "hdgst": false, 00:29:22.701 "ddgst": false 00:29:22.701 }, 00:29:22.701 "method": "bdev_nvme_attach_controller" 00:29:22.701 },{ 00:29:22.701 "params": { 00:29:22.701 "name": "Nvme1", 00:29:22.701 "trtype": "tcp", 00:29:22.701 "traddr": "10.0.0.2", 00:29:22.701 "adrfam": "ipv4", 00:29:22.701 "trsvcid": "4420", 00:29:22.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.701 "hdgst": false, 00:29:22.701 "ddgst": false 00:29:22.701 }, 00:29:22.701 "method": "bdev_nvme_attach_controller" 00:29:22.701 },{ 00:29:22.701 "params": { 00:29:22.701 "name": "Nvme2", 00:29:22.701 "trtype": "tcp", 00:29:22.701 "traddr": "10.0.0.2", 00:29:22.701 "adrfam": "ipv4", 00:29:22.701 "trsvcid": "4420", 00:29:22.701 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.701 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.701 "hdgst": false, 00:29:22.701 "ddgst": false 00:29:22.701 }, 00:29:22.701 "method": "bdev_nvme_attach_controller" 00:29:22.701 }' 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.701 14:34:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:22.701 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.701 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:22.701 ... 00:29:22.701 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:22.701 ... 00:29:22.701 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:22.701 ... 
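The sanitizer lookup traced above (`ldd ... | grep libclang_rt.asan | awk '{print $3}'`) resolves to an empty `asan_lib` in this run, so only the spdk_bdev fio plugin ends up in `LD_PRELOAD`. A standalone sketch of that lookup, using a canned `ldd` line instead of running `ldd` on a real binary (the `.so` path is made up for illustration):

```shell
# Resolve the ASAN runtime path the way autotest_common.sh does: take the
# third field ("name => <path> (addr)") of the matching ldd line. A canned
# ldd line stands in for `ldd build/fio/spdk_bdev`; the path is invented.
sample_ldd='	libclang_rt.asan-x86_64.so => /usr/lib64/libclang_rt.asan-x86_64.so (0x00007f5e2a000000)'
asan_lib=$(printf '%s\n' "$sample_ldd" | grep libclang_rt.asan | awk '{print $3}')
printf '%s\n' "$asan_lib"
```

When the grep matches nothing (a non-ASAN build, as here), `asan_lib` stays empty and `LD_PRELOAD` carries just the spdk_bdev engine, which is why the trace shows `LD_PRELOAD=' /var/jenkins/.../spdk/build/fio/spdk_bdev'` with a leading space.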
00:29:22.701 fio-3.35 00:29:22.701 Starting 24 threads 00:29:22.701 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.890 00:29:34.890 filename0: (groupid=0, jobs=1): err= 0: pid=2724972: Fri Jul 12 14:34:25 2024 00:29:34.890 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10020msec) 00:29:34.890 slat (nsec): min=6930, max=56423, avg=16104.55, stdev=5082.89 00:29:34.890 clat (usec): min=5582, max=29509, avg=27759.82, stdev=1487.96 00:29:34.890 lat (usec): min=5600, max=29522, avg=27775.93, stdev=1487.70 00:29:34.890 clat percentiles (usec): 00:29:34.890 | 1.00th=[20841], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:34.890 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.890 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.890 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:29:34.890 | 99.99th=[29492] 00:29:34.890 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2291.20, stdev=57.24, samples=20 00:29:34.890 iops : min= 544, max= 608, avg=572.80, stdev=14.31, samples=20 00:29:34.890 lat (msec) : 10=0.28%, 20=0.56%, 50=99.16% 00:29:34.890 cpu : usr=98.77%, sys=0.84%, ctx=19, majf=0, minf=55 00:29:34.890 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.890 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.890 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.890 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.890 filename0: (groupid=0, jobs=1): err= 0: pid=2724973: Fri Jul 12 14:34:25 2024 00:29:34.890 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:29:34.890 slat (usec): min=7, max=102, avg=35.49, stdev=23.67 00:29:34.890 clat (usec): min=24389, max=41304, avg=27832.86, stdev=800.88 00:29:34.890 lat (usec): min=24399, max=41336, avg=27868.35, stdev=794.89 
00:29:34.890 clat percentiles (usec): 00:29:34.891 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:34.891 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.891 | 99.00th=[28705], 99.50th=[31327], 99.90th=[40633], 99.95th=[40633], 00:29:34.891 | 99.99th=[41157] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.891 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.891 lat (msec) : 50=100.00% 00:29:34.891 cpu : usr=98.50%, sys=1.11%, ctx=24, majf=0, minf=64 00:29:34.891 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724974: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.891 slat (nsec): min=10732, max=99118, avg=46228.06, stdev=21135.19 00:29:34.891 clat (usec): min=25428, max=41135, avg=27663.36, stdev=805.02 00:29:34.891 lat (usec): min=25483, max=41151, avg=27709.59, stdev=805.20 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.891 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.891 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.891 | 99.00th=[28705], 99.50th=[30540], 99.90th=[41157], 99.95th=[41157], 00:29:34.891 | 99.99th=[41157] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.891 iops : min= 544, max= 
576, avg=569.26, stdev=13.40, samples=19 00:29:34.891 lat (msec) : 50=100.00% 00:29:34.891 cpu : usr=98.88%, sys=0.75%, ctx=11, majf=0, minf=54 00:29:34.891 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724975: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:29:34.891 slat (nsec): min=5731, max=84754, avg=17144.85, stdev=5914.21 00:29:34.891 clat (usec): min=10786, max=48591, avg=27873.74, stdev=2715.87 00:29:34.891 lat (usec): min=10800, max=48606, avg=27890.89, stdev=2715.88 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[12125], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:34.891 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.891 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47449], 99.95th=[47973], 00:29:34.891 | 99.99th=[48497] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:29:34.891 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:34.891 lat (msec) : 20=1.47%, 50=98.53% 00:29:34.891 cpu : usr=98.77%, sys=0.85%, ctx=14, majf=0, minf=71 00:29:34.891 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724976: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.891 slat (usec): min=6, max=102, avg=45.21, stdev=21.22 00:29:34.891 clat (usec): min=22392, max=40934, avg=27661.44, stdev=811.82 00:29:34.891 lat (usec): min=22407, max=40948, avg=27706.65, stdev=812.72 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.891 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.891 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.891 | 99.00th=[28705], 99.50th=[30802], 99.90th=[40633], 99.95th=[41157], 00:29:34.891 | 99.99th=[41157] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.891 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.891 lat (msec) : 50=100.00% 00:29:34.891 cpu : usr=98.83%, sys=0.79%, ctx=13, majf=0, minf=57 00:29:34.891 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724977: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10011msec) 00:29:34.891 slat (nsec): min=6457, max=42945, avg=17599.77, stdev=5168.24 00:29:34.891 clat (usec): min=10841, max=48133, avg=27857.30, stdev=2224.53 00:29:34.891 lat (usec): min=10853, max=48150, avg=27874.90, stdev=2224.46 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[14353], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 
00:29:34.891 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.891 | 99.00th=[30278], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:29:34.891 | 99.99th=[47973] 00:29:34.891 bw ( KiB/s): min= 2096, max= 2304, per=4.15%, avg=2272.84, stdev=64.11, samples=19 00:29:34.891 iops : min= 524, max= 576, avg=568.21, stdev=16.03, samples=19 00:29:34.891 lat (msec) : 20=1.03%, 50=98.97% 00:29:34.891 cpu : usr=98.73%, sys=0.90%, ctx=20, majf=0, minf=75 00:29:34.891 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724978: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:29:34.891 slat (nsec): min=4244, max=37789, avg=17302.89, stdev=5209.95 00:29:34.891 clat (usec): min=6394, max=47574, avg=27878.42, stdev=2371.81 00:29:34.891 lat (usec): min=6402, max=47588, avg=27895.72, stdev=2371.66 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[14222], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:29:34.891 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.891 | 99.00th=[30278], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:29:34.891 | 99.99th=[47449] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=56.16, samples=19 00:29:34.891 iops : min= 544, max= 576, avg=567.58, stdev=14.04, samples=19 00:29:34.891 lat (msec) : 10=0.04%, 20=1.12%, 50=98.84% 00:29:34.891 cpu 
: usr=98.81%, sys=0.81%, ctx=18, majf=0, minf=55 00:29:34.891 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename0: (groupid=0, jobs=1): err= 0: pid=2724979: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.891 slat (nsec): min=9362, max=87168, avg=35348.58, stdev=10971.43 00:29:34.891 clat (usec): min=22360, max=41276, avg=27796.03, stdev=776.38 00:29:34.891 lat (usec): min=22388, max=41308, avg=27831.38, stdev=775.18 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:34.891 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:34.891 | 99.00th=[28705], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:29:34.891 | 99.99th=[41157] 00:29:34.891 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.891 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.891 lat (msec) : 50=100.00% 00:29:34.891 cpu : usr=98.58%, sys=0.80%, ctx=93, majf=0, minf=44 00:29:34.891 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename1: (groupid=0, jobs=1): err= 0: pid=2724980: Fri Jul 12 
14:34:25 2024 00:29:34.891 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10004msec) 00:29:34.891 slat (nsec): min=5773, max=95543, avg=17856.10, stdev=16801.58 00:29:34.891 clat (usec): min=6306, max=62470, avg=27787.39, stdev=3635.94 00:29:34.891 lat (usec): min=6313, max=62486, avg=27805.25, stdev=3634.55 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[18482], 5.00th=[20579], 10.00th=[24511], 20.00th=[27657], 00:29:34.891 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.891 | 70.00th=[27919], 80.00th=[28181], 90.00th=[30278], 95.00th=[32113], 00:29:34.891 | 99.00th=[36963], 99.50th=[36963], 99.90th=[62653], 99.95th=[62653], 00:29:34.891 | 99.99th=[62653] 00:29:34.891 bw ( KiB/s): min= 2048, max= 2368, per=4.17%, avg=2285.47, stdev=70.61, samples=19 00:29:34.891 iops : min= 512, max= 592, avg=571.37, stdev=17.65, samples=19 00:29:34.891 lat (msec) : 10=0.21%, 20=3.40%, 50=96.12%, 100=0.28% 00:29:34.891 cpu : usr=98.85%, sys=0.77%, ctx=14, majf=0, minf=88 00:29:34.891 IO depths : 1=1.3%, 2=2.6%, 4=6.9%, 8=74.6%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:34.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.891 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.891 filename1: (groupid=0, jobs=1): err= 0: pid=2724981: Fri Jul 12 14:34:25 2024 00:29:34.891 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10004msec) 00:29:34.891 slat (usec): min=5, max=102, avg=40.29, stdev=23.22 00:29:34.891 clat (usec): min=10377, max=48551, avg=27597.03, stdev=2262.80 00:29:34.891 lat (usec): min=10391, max=48568, avg=27637.32, stdev=2264.25 00:29:34.891 clat percentiles (usec): 00:29:34.891 | 1.00th=[17171], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:34.891 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 
00:29:34.892 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.892 | 99.00th=[34866], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:29:34.892 | 99.99th=[48497] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.89, stdev=48.94, samples=19 00:29:34.892 iops : min= 544, max= 576, avg=569.47, stdev=12.24, samples=19 00:29:34.892 lat (msec) : 20=1.14%, 50=98.86% 00:29:34.892 cpu : usr=98.81%, sys=0.82%, ctx=16, majf=0, minf=75 00:29:34.892 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724982: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.892 slat (usec): min=9, max=100, avg=44.74, stdev=21.25 00:29:34.892 clat (usec): min=23740, max=40879, avg=27663.79, stdev=800.89 00:29:34.892 lat (usec): min=23749, max=40896, avg=27708.54, stdev=801.90 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.892 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.892 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.892 | 99.00th=[28705], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:29:34.892 | 99.99th=[40633] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.892 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.892 lat (msec) : 50=100.00% 00:29:34.892 cpu : usr=98.77%, sys=0.85%, ctx=20, majf=0, minf=40 00:29:34.892 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724983: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10007msec) 00:29:34.892 slat (usec): min=6, max=101, avg=40.45, stdev=24.26 00:29:34.892 clat (usec): min=9659, max=58565, avg=27497.35, stdev=2045.39 00:29:34.892 lat (usec): min=9681, max=58590, avg=27537.80, stdev=2048.01 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[17695], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.892 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.892 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:34.892 | 99.00th=[30802], 99.50th=[31327], 99.90th=[49021], 99.95th=[49021], 00:29:34.892 | 99.99th=[58459] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2480, per=4.17%, avg=2288.84, stdev=67.56, samples=19 00:29:34.892 iops : min= 544, max= 620, avg=572.21, stdev=16.89, samples=19 00:29:34.892 lat (msec) : 10=0.07%, 20=1.46%, 50=98.43%, 100=0.03% 00:29:34.892 cpu : usr=98.82%, sys=0.81%, ctx=14, majf=0, minf=71 00:29:34.892 IO depths : 1=5.0%, 2=10.1%, 4=20.7%, 8=55.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724984: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10008msec) 
00:29:34.892 slat (usec): min=6, max=100, avg=43.16, stdev=22.69 00:29:34.892 clat (usec): min=11080, max=50370, avg=27586.97, stdev=1533.18 00:29:34.892 lat (usec): min=11088, max=50386, avg=27630.13, stdev=1535.30 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:34.892 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.892 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.892 | 99.00th=[28705], 99.50th=[30802], 99.90th=[45876], 99.95th=[45876], 00:29:34.892 | 99.99th=[50594] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=57.91, samples=19 00:29:34.892 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:34.892 lat (msec) : 20=0.56%, 50=99.40%, 100=0.04% 00:29:34.892 cpu : usr=98.76%, sys=0.87%, ctx=12, majf=0, minf=49 00:29:34.892 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724985: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:29:34.892 slat (nsec): min=7213, max=95333, avg=35531.69, stdev=21577.81 00:29:34.892 clat (usec): min=23928, max=41173, avg=27844.77, stdev=771.43 00:29:34.892 lat (usec): min=23936, max=41192, avg=27880.30, stdev=767.39 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:34.892 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:34.892 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.892 
| 99.00th=[28705], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:29:34.892 | 99.99th=[41157] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.892 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.892 lat (msec) : 50=100.00% 00:29:34.892 cpu : usr=98.80%, sys=0.82%, ctx=17, majf=0, minf=71 00:29:34.892 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724986: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10006msec) 00:29:34.892 slat (nsec): min=4465, max=41534, avg=17152.27, stdev=5498.17 00:29:34.892 clat (usec): min=10871, max=49394, avg=27789.28, stdev=2482.72 00:29:34.892 lat (usec): min=10878, max=49407, avg=27806.43, stdev=2482.60 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[14353], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:34.892 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.892 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.892 | 99.00th=[31065], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:29:34.892 | 99.99th=[49546] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.89, stdev=52.04, samples=19 00:29:34.892 iops : min= 544, max= 576, avg=569.47, stdev=13.01, samples=19 00:29:34.892 lat (msec) : 20=1.33%, 50=98.67% 00:29:34.892 cpu : usr=98.69%, sys=0.92%, ctx=10, majf=0, minf=49 00:29:34.892 IO depths : 1=5.4%, 2=10.9%, 4=22.1%, 8=54.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename1: (groupid=0, jobs=1): err= 0: pid=2724987: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=579, BW=2317KiB/s (2373kB/s)(22.7MiB/10027msec) 00:29:34.892 slat (nsec): min=3287, max=49932, avg=14639.23, stdev=4782.86 00:29:34.892 clat (usec): min=2334, max=34623, avg=27495.53, stdev=3065.10 00:29:34.892 lat (usec): min=2341, max=34641, avg=27510.17, stdev=3065.11 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[ 5145], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:34.892 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.892 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.892 | 99.00th=[28705], 99.50th=[29492], 99.90th=[34341], 99.95th=[34341], 00:29:34.892 | 99.99th=[34866] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2949, per=4.23%, avg=2317.05, stdev=155.90, samples=20 00:29:34.892 iops : min= 544, max= 737, avg=579.25, stdev=38.92, samples=20 00:29:34.892 lat (msec) : 4=0.59%, 10=1.07%, 20=0.28%, 50=98.07% 00:29:34.892 cpu : usr=98.58%, sys=1.04%, ctx=18, majf=0, minf=84 00:29:34.892 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename2: (groupid=0, jobs=1): err= 0: pid=2724988: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10026msec) 00:29:34.892 slat (nsec): min=4209, max=52449, avg=15867.77, stdev=4349.61 
00:29:34.892 clat (usec): min=4739, max=34920, avg=27555.58, stdev=2864.34 00:29:34.892 lat (usec): min=4748, max=34935, avg=27571.44, stdev=2864.40 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[ 6194], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:34.892 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.892 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.892 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:29:34.892 | 99.99th=[34866] 00:29:34.892 bw ( KiB/s): min= 2176, max= 2816, per=4.22%, avg=2310.40, stdev=127.83, samples=20 00:29:34.892 iops : min= 544, max= 704, avg=577.60, stdev=31.96, samples=20 00:29:34.892 lat (msec) : 10=1.38%, 20=0.28%, 50=98.34% 00:29:34.892 cpu : usr=98.79%, sys=0.83%, ctx=15, majf=0, minf=69 00:29:34.892 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:34.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.892 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.892 filename2: (groupid=0, jobs=1): err= 0: pid=2724989: Fri Jul 12 14:34:25 2024 00:29:34.892 read: IOPS=573, BW=2292KiB/s (2347kB/s)(22.4MiB/10024msec) 00:29:34.892 slat (usec): min=6, max=100, avg=22.57, stdev=20.45 00:29:34.892 clat (usec): min=5584, max=32279, avg=27752.39, stdev=1542.80 00:29:34.892 lat (usec): min=5600, max=32291, avg=27774.97, stdev=1540.78 00:29:34.892 clat percentiles (usec): 00:29:34.892 | 1.00th=[20841], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:29:34.892 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.892 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.892 | 99.00th=[28705], 99.50th=[29492], 99.90th=[31327], 99.95th=[31327], 
00:29:34.893 | 99.99th=[32375] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:29:34.893 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:29:34.893 lat (msec) : 10=0.28%, 20=0.56%, 50=99.16% 00:29:34.893 cpu : usr=98.55%, sys=1.08%, ctx=20, majf=0, minf=81 00:29:34.893 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724990: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10008msec) 00:29:34.893 slat (usec): min=6, max=104, avg=41.95, stdev=22.93 00:29:34.893 clat (usec): min=11892, max=50485, avg=27636.93, stdev=1766.24 00:29:34.893 lat (usec): min=11904, max=50501, avg=27678.88, stdev=1768.50 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[22676], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.893 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.893 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:34.893 | 99.00th=[32637], 99.50th=[33162], 99.90th=[45876], 99.95th=[45876], 00:29:34.893 | 99.99th=[50594] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.32, stdev=53.90, samples=19 00:29:34.893 iops : min= 544, max= 576, avg=567.58, stdev=13.48, samples=19 00:29:34.893 lat (msec) : 20=0.53%, 50=99.44%, 100=0.04% 00:29:34.893 cpu : usr=98.76%, sys=0.87%, ctx=16, majf=0, minf=55 00:29:34.893 IO depths : 1=4.5%, 2=10.4%, 4=23.9%, 8=53.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 
0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724991: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.893 slat (nsec): min=7012, max=96007, avg=30376.62, stdev=20301.31 00:29:34.893 clat (usec): min=21816, max=41204, avg=27864.40, stdev=1003.00 00:29:34.893 lat (usec): min=21829, max=41229, avg=27894.78, stdev=1001.00 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[24511], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:29:34.893 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:34.893 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.893 | 99.00th=[31589], 99.50th=[32113], 99.90th=[40633], 99.95th=[40633], 00:29:34.893 | 99.99th=[41157] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.893 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.893 lat (msec) : 50=100.00% 00:29:34.893 cpu : usr=98.76%, sys=0.87%, ctx=13, majf=0, minf=68 00:29:34.893 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724992: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:29:34.893 slat (usec): min=8, max=103, avg=46.04, stdev=21.46 00:29:34.893 clat (usec): min=25945, max=40818, avg=27690.61, stdev=792.23 00:29:34.893 lat 
(usec): min=25979, max=40833, avg=27736.65, stdev=790.96 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.893 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:34.893 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.893 | 99.00th=[28705], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:29:34.893 | 99.99th=[40633] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2277.05, stdev=53.61, samples=19 00:29:34.893 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:34.893 lat (msec) : 50=100.00% 00:29:34.893 cpu : usr=98.85%, sys=0.78%, ctx=9, majf=0, minf=53 00:29:34.893 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724993: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=580, BW=2321KiB/s (2376kB/s)(22.7MiB/10011msec) 00:29:34.893 slat (nsec): min=4207, max=49005, avg=12379.28, stdev=4903.45 00:29:34.893 clat (usec): min=3425, max=44150, avg=27473.25, stdev=3132.61 00:29:34.893 lat (usec): min=3435, max=44165, avg=27485.63, stdev=3132.41 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[ 5014], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:29:34.893 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:34.893 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:34.893 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:29:34.893 | 99.99th=[44303] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2944, per=4.23%, avg=2316.80, 
stdev=154.83, samples=20 00:29:34.893 iops : min= 544, max= 736, avg=579.20, stdev=38.71, samples=20 00:29:34.893 lat (msec) : 4=0.28%, 10=1.38%, 20=0.60%, 50=97.74% 00:29:34.893 cpu : usr=98.80%, sys=0.83%, ctx=12, majf=0, minf=101 00:29:34.893 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724994: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10002msec) 00:29:34.893 slat (nsec): min=7575, max=99151, avg=45196.92, stdev=22089.35 00:29:34.893 clat (usec): min=13655, max=51477, avg=27654.10, stdev=1588.78 00:29:34.893 lat (usec): min=13670, max=51515, avg=27699.30, stdev=1590.03 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[24511], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:34.893 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:34.893 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:34.893 | 99.00th=[31065], 99.50th=[31327], 99.90th=[51119], 99.95th=[51643], 00:29:34.893 | 99.99th=[51643] 00:29:34.893 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2270.32, stdev=70.53, samples=19 00:29:34.893 iops : min= 512, max= 576, avg=567.58, stdev=17.63, samples=19 00:29:34.893 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:29:34.893 cpu : usr=98.68%, sys=0.92%, ctx=20, majf=0, minf=59 00:29:34.893 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued 
rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 filename2: (groupid=0, jobs=1): err= 0: pid=2724995: Fri Jul 12 14:34:25 2024 00:29:34.893 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10003msec) 00:29:34.893 slat (nsec): min=6833, max=95847, avg=24270.44, stdev=17878.74 00:29:34.893 clat (usec): min=9814, max=47516, avg=27718.78, stdev=2519.75 00:29:34.893 lat (usec): min=9827, max=47530, avg=27743.05, stdev=2519.51 00:29:34.893 clat percentiles (usec): 00:29:34.893 | 1.00th=[19530], 5.00th=[25035], 10.00th=[27395], 20.00th=[27395], 00:29:34.893 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:34.893 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28967], 00:29:34.893 | 99.00th=[36439], 99.50th=[40109], 99.90th=[47449], 99.95th=[47449], 00:29:34.893 | 99.99th=[47449] 00:29:34.893 bw ( KiB/s): min= 2176, max= 2352, per=4.17%, avg=2284.63, stdev=52.68, samples=19 00:29:34.893 iops : min= 544, max= 588, avg=571.16, stdev=13.17, samples=19 00:29:34.893 lat (msec) : 10=0.07%, 20=1.64%, 50=98.29% 00:29:34.893 cpu : usr=98.77%, sys=0.86%, ctx=11, majf=0, minf=61 00:29:34.893 IO depths : 1=4.5%, 2=9.1%, 4=19.2%, 8=58.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:29:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.893 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:34.893 00:29:34.893 Run status group 0 (all jobs): 00:29:34.893 READ: bw=53.5MiB/s (56.1MB/s), 2277KiB/s-2321KiB/s (2332kB/s-2376kB/s), io=537MiB (563MB), run=10002-10027msec 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:34.893 14:34:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.893 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 bdev_null0 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 [2024-07-12 14:34:25.515242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 bdev_null1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.894 { 00:29:34.894 "params": { 00:29:34.894 "name": "Nvme$subsystem", 00:29:34.894 "trtype": "$TEST_TRANSPORT", 00:29:34.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.894 "adrfam": "ipv4", 00:29:34.894 "trsvcid": "$NVMF_PORT", 00:29:34.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.894 "hdgst": ${hdgst:-false}, 00:29:34.894 "ddgst": ${ddgst:-false} 00:29:34.894 }, 00:29:34.894 "method": "bdev_nvme_attach_controller" 00:29:34.894 } 00:29:34.894 EOF 00:29:34.894 )") 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:34.894 14:34:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:34.894 { 00:29:34.894 "params": { 00:29:34.894 "name": "Nvme$subsystem", 00:29:34.894 "trtype": "$TEST_TRANSPORT", 00:29:34.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.894 "adrfam": "ipv4", 00:29:34.894 "trsvcid": "$NVMF_PORT", 00:29:34.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.894 "hdgst": ${hdgst:-false}, 00:29:34.894 "ddgst": ${ddgst:-false} 00:29:34.894 }, 00:29:34.894 "method": "bdev_nvme_attach_controller" 00:29:34.894 } 00:29:34.894 EOF 00:29:34.894 )") 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:34.894 "params": { 00:29:34.894 "name": "Nvme0", 00:29:34.894 "trtype": "tcp", 00:29:34.894 "traddr": "10.0.0.2", 00:29:34.894 "adrfam": "ipv4", 00:29:34.894 "trsvcid": "4420", 00:29:34.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:34.894 "hdgst": false, 00:29:34.894 "ddgst": false 00:29:34.894 }, 00:29:34.894 "method": "bdev_nvme_attach_controller" 00:29:34.894 },{ 00:29:34.894 "params": { 00:29:34.894 "name": "Nvme1", 00:29:34.894 "trtype": "tcp", 00:29:34.894 "traddr": "10.0.0.2", 00:29:34.894 "adrfam": "ipv4", 00:29:34.894 "trsvcid": "4420", 00:29:34.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.894 "hdgst": false, 00:29:34.894 "ddgst": false 00:29:34.894 }, 00:29:34.894 "method": "bdev_nvme_attach_controller" 00:29:34.894 }' 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:34.894 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:34.895 14:34:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:34.895 14:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:34.895 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:34.895 ... 00:29:34.895 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:34.895 ... 00:29:34.895 fio-3.35 00:29:34.895 Starting 4 threads 00:29:34.895 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.162 00:29:40.162 filename0: (groupid=0, jobs=1): err= 0: pid=2726941: Fri Jul 12 14:34:31 2024 00:29:40.162 read: IOPS=2723, BW=21.3MiB/s (22.3MB/s)(106MiB/5003msec) 00:29:40.162 slat (nsec): min=4215, max=67747, avg=11344.73, stdev=6930.62 00:29:40.162 clat (usec): min=646, max=6945, avg=2904.24, stdev=504.55 00:29:40.162 lat (usec): min=655, max=6965, avg=2915.59, stdev=504.76 00:29:40.162 clat percentiles (usec): 00:29:40.162 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2540], 00:29:40.162 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2999], 00:29:40.162 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3785], 00:29:40.162 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 6194], 00:29:40.162 | 99.99th=[ 6259] 00:29:40.162 bw ( KiB/s): min=20768, max=22608, per=26.05%, avg=21752.89, stdev=498.90, samples=9 00:29:40.162 iops : min= 2596, max= 2826, avg=2719.11, stdev=62.36, samples=9 00:29:40.162 lat (usec) : 750=0.05%, 1000=0.25% 00:29:40.162 lat (msec) : 2=1.84%, 4=94.42%, 10=3.44% 00:29:40.162 cpu : usr=96.86%, sys=2.80%, ctx=7, majf=0, minf=0 00:29:40.162 IO depths : 1=0.2%, 2=4.9%, 4=65.7%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.162 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 issued rwts: total=13626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:40.162 filename0: (groupid=0, jobs=1): err= 0: pid=2726942: Fri Jul 12 14:34:31 2024 00:29:40.162 read: IOPS=2516, BW=19.7MiB/s (20.6MB/s)(98.3MiB/5002msec) 00:29:40.162 slat (nsec): min=6072, max=61100, avg=11448.30, stdev=7343.64 00:29:40.162 clat (usec): min=726, max=5573, avg=3146.13, stdev=513.79 00:29:40.162 lat (usec): min=738, max=5580, avg=3157.58, stdev=513.26 00:29:40.162 clat percentiles (usec): 00:29:40.162 | 1.00th=[ 2040], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2835], 00:29:40.162 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:29:40.162 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4228], 00:29:40.162 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5407], 00:29:40.162 | 99.99th=[ 5538] 00:29:40.162 bw ( KiB/s): min=19440, max=20752, per=24.13%, avg=20151.11, stdev=410.74, samples=9 00:29:40.162 iops : min= 2430, max= 2594, avg=2518.89, stdev=51.34, samples=9 00:29:40.162 lat (usec) : 750=0.01% 00:29:40.162 lat (msec) : 2=0.84%, 4=92.17%, 10=6.98% 00:29:40.162 cpu : usr=96.94%, sys=2.74%, ctx=15, majf=0, minf=9 00:29:40.162 IO depths : 1=0.1%, 2=2.1%, 4=70.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 issued rwts: total=12586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:40.162 filename1: (groupid=0, jobs=1): err= 0: pid=2726943: Fri Jul 12 14:34:31 2024 00:29:40.162 read: IOPS=2543, BW=19.9MiB/s (20.8MB/s)(99.4MiB/5001msec) 00:29:40.162 slat (nsec): min=6178, max=69125, 
avg=14051.39, stdev=9196.43 00:29:40.162 clat (usec): min=613, max=5908, avg=3102.43, stdev=512.88 00:29:40.162 lat (usec): min=622, max=5929, avg=3116.49, stdev=512.39 00:29:40.162 clat percentiles (usec): 00:29:40.162 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2769], 00:29:40.162 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:29:40.162 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3720], 95.00th=[ 4113], 00:29:40.162 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5276], 99.95th=[ 5342], 00:29:40.162 | 99.99th=[ 5473] 00:29:40.162 bw ( KiB/s): min=19216, max=20928, per=24.45%, avg=20417.78, stdev=504.60, samples=9 00:29:40.162 iops : min= 2402, max= 2616, avg=2552.22, stdev=63.07, samples=9 00:29:40.162 lat (usec) : 750=0.01%, 1000=0.02% 00:29:40.162 lat (msec) : 2=0.90%, 4=92.88%, 10=6.19% 00:29:40.162 cpu : usr=96.74%, sys=2.68%, ctx=151, majf=0, minf=9 00:29:40.162 IO depths : 1=0.1%, 2=4.3%, 4=67.7%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 issued rwts: total=12722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:40.162 filename1: (groupid=0, jobs=1): err= 0: pid=2726944: Fri Jul 12 14:34:31 2024 00:29:40.162 read: IOPS=2658, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:29:40.162 slat (nsec): min=6095, max=67611, avg=11297.51, stdev=6885.25 00:29:40.162 clat (usec): min=1150, max=5523, avg=2977.11, stdev=458.71 00:29:40.162 lat (usec): min=1158, max=5529, avg=2988.41, stdev=458.77 00:29:40.162 clat percentiles (usec): 00:29:40.162 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2638], 00:29:40.162 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3032], 00:29:40.162 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3785], 00:29:40.162 | 
99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5276], 00:29:40.162 | 99.99th=[ 5538] 00:29:40.162 bw ( KiB/s): min=20560, max=22096, per=25.35%, avg=21175.11, stdev=565.30, samples=9 00:29:40.162 iops : min= 2570, max= 2762, avg=2646.89, stdev=70.66, samples=9 00:29:40.162 lat (msec) : 2=1.32%, 4=95.36%, 10=3.32% 00:29:40.162 cpu : usr=97.24%, sys=2.44%, ctx=9, majf=0, minf=9 00:29:40.162 IO depths : 1=0.2%, 2=3.8%, 4=66.3%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.162 issued rwts: total=13295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:40.162 00:29:40.162 Run status group 0 (all jobs): 00:29:40.162 READ: bw=81.6MiB/s (85.5MB/s), 19.7MiB/s-21.3MiB/s (20.6MB/s-22.3MB/s), io=408MiB (428MB), run=5001-5003msec 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:40.162 
14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.162 00:29:40.162 real 0m24.246s 00:29:40.162 user 4m52.480s 00:29:40.162 sys 0m4.375s 00:29:40.162 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:40.163 14:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 ************************************ 00:29:40.163 END TEST fio_dif_rand_params 00:29:40.163 ************************************ 00:29:40.163 14:34:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:40.163 14:34:31 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:29:40.163 14:34:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:40.163 14:34:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.163 14:34:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 ************************************ 00:29:40.163 START TEST fio_dif_digest 00:29:40.163 ************************************ 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:40.163 14:34:31 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 bdev_null0 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 14:34:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.163 [2024-07-12 14:34:32.013139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:40.163 { 00:29:40.163 "params": { 00:29:40.163 "name": "Nvme$subsystem", 00:29:40.163 "trtype": "$TEST_TRANSPORT", 00:29:40.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.163 "adrfam": "ipv4", 00:29:40.163 "trsvcid": "$NVMF_PORT", 00:29:40.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.163 "hdgst": ${hdgst:-false}, 00:29:40.163 "ddgst": ${ddgst:-false} 00:29:40.163 }, 00:29:40.163 "method": "bdev_nvme_attach_controller" 00:29:40.163 } 00:29:40.163 EOF 00:29:40.163 )") 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:40.163 "params": { 00:29:40.163 "name": "Nvme0", 00:29:40.163 "trtype": "tcp", 00:29:40.163 "traddr": "10.0.0.2", 00:29:40.163 "adrfam": "ipv4", 00:29:40.163 "trsvcid": "4420", 00:29:40.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:40.163 "hdgst": true, 00:29:40.163 "ddgst": true 00:29:40.163 }, 00:29:40.163 "method": "bdev_nvme_attach_controller" 00:29:40.163 }' 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # 
for sanitizer in "${sanitizers[@]}" 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:40.163 14:34:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:40.422 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:40.422 ... 
00:29:40.422 fio-3.35 00:29:40.422 Starting 3 threads 00:29:40.422 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.629 00:29:52.629 filename0: (groupid=0, jobs=1): err= 0: pid=2728215: Fri Jul 12 14:34:42 2024 00:29:52.629 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10047msec) 00:29:52.629 slat (nsec): min=6503, max=49993, avg=18241.73, stdev=7799.83 00:29:52.629 clat (usec): min=8063, max=51290, avg=10270.04, stdev=1280.15 00:29:52.629 lat (usec): min=8071, max=51300, avg=10288.29, stdev=1280.11 00:29:52.629 clat percentiles (usec): 00:29:52.629 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:29:52.629 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:29:52.629 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:29:52.629 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13698], 99.95th=[49546], 00:29:52.629 | 99.99th=[51119] 00:29:52.629 bw ( KiB/s): min=36352, max=39168, per=34.88%, avg=37401.60, stdev=751.66, samples=20 00:29:52.629 iops : min= 284, max= 306, avg=292.20, stdev= 5.87, samples=20 00:29:52.629 lat (msec) : 10=37.06%, 20=62.87%, 50=0.03%, 100=0.03% 00:29:52.629 cpu : usr=96.95%, sys=2.75%, ctx=22, majf=0, minf=142 00:29:52.629 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.629 issued rwts: total=2925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.629 filename0: (groupid=0, jobs=1): err= 0: pid=2728217: Fri Jul 12 14:34:42 2024 00:29:52.629 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(346MiB/10046msec) 00:29:52.629 slat (nsec): min=6850, max=56618, avg=23436.22, stdev=6544.74 00:29:52.629 clat (usec): min=8177, max=48006, avg=10862.58, stdev=1243.73 00:29:52.629 lat (usec): min=8202, max=48035, avg=10886.02, 
stdev=1244.05 00:29:52.629 clat percentiles (usec): 00:29:52.629 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:29:52.629 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:29:52.629 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:29:52.629 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13960], 99.95th=[45876], 00:29:52.629 | 99.99th=[47973] 00:29:52.629 bw ( KiB/s): min=33280, max=36864, per=32.98%, avg=35357.20, stdev=760.11, samples=20 00:29:52.629 iops : min= 260, max= 288, avg=276.20, stdev= 5.91, samples=20 00:29:52.629 lat (msec) : 10=13.60%, 20=86.32%, 50=0.07% 00:29:52.629 cpu : usr=95.98%, sys=3.71%, ctx=28, majf=0, minf=161 00:29:52.630 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.630 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.630 filename0: (groupid=0, jobs=1): err= 0: pid=2728218: Fri Jul 12 14:34:42 2024 00:29:52.630 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10046msec) 00:29:52.630 slat (nsec): min=6778, max=54602, avg=18815.05, stdev=7769.71 00:29:52.630 clat (usec): min=7854, max=50430, avg=11014.35, stdev=1295.79 00:29:52.630 lat (usec): min=7867, max=50458, avg=11033.16, stdev=1296.06 00:29:52.630 clat percentiles (usec): 00:29:52.630 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:29:52.630 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:29:52.630 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:29:52.630 | 99.00th=[13042], 99.50th=[13435], 99.90th=[15008], 99.95th=[46924], 00:29:52.630 | 99.99th=[50594] 00:29:52.630 bw ( KiB/s): min=32768, max=37376, per=32.53%, avg=34883.50, stdev=969.02, samples=20 
00:29:52.630 iops : min= 256, max= 292, avg=272.50, stdev= 7.56, samples=20 00:29:52.630 lat (msec) : 10=9.53%, 20=90.39%, 50=0.04%, 100=0.04% 00:29:52.630 cpu : usr=97.06%, sys=2.64%, ctx=20, majf=0, minf=160 00:29:52.630 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.630 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.630 00:29:52.630 Run status group 0 (all jobs): 00:29:52.630 READ: bw=105MiB/s (110MB/s), 33.9MiB/s-36.4MiB/s (35.6MB/s-38.2MB/s), io=1052MiB (1103MB), run=10046-10047msec 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.630 00:29:52.630 real 0m11.141s 00:29:52.630 user 0m35.934s 00:29:52.630 sys 0m1.232s 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.630 14:34:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:52.630 ************************************ 00:29:52.630 END TEST fio_dif_digest 00:29:52.630 ************************************ 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:52.630 14:34:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:52.630 14:34:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.630 rmmod nvme_tcp 00:29:52.630 rmmod nvme_fabrics 00:29:52.630 rmmod nvme_keyring 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2719606 ']' 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2719606 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2719606 ']' 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2719606 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:52.630 14:34:43 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2719606 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2719606' 00:29:52.630 killing process with pid 2719606 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2719606 00:29:52.630 14:34:43 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2719606 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:52.630 14:34:43 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:54.007 Waiting for block devices as requested 00:29:54.007 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:54.007 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:54.007 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:54.007 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:54.265 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:54.265 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:54.265 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:54.265 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:54.523 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:54.524 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:54.524 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:54.524 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:54.782 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:54.782 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:54.782 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:55.041 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:55.041 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:55.041 14:34:46 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.041 14:34:46 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.041 
14:34:46 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.041 14:34:46 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.041 14:34:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.041 14:34:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:55.041 14:34:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.623 14:34:49 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.623 00:29:57.623 real 1m12.602s 00:29:57.623 user 7m9.564s 00:29:57.623 sys 0m17.477s 00:29:57.623 14:34:49 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.623 14:34:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.623 ************************************ 00:29:57.623 END TEST nvmf_dif 00:29:57.623 ************************************ 00:29:57.623 14:34:49 -- common/autotest_common.sh@1142 -- # return 0 00:29:57.623 14:34:49 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:57.623 14:34:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:57.623 14:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.623 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:29:57.623 ************************************ 00:29:57.623 START TEST nvmf_abort_qd_sizes 00:29:57.623 ************************************ 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:57.623 * Looking for test storage... 
00:29:57.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.623 14:34:49 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.623 14:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:02.893 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:02.893 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:02.893 Found net devices under 0000:86:00.0: cvl_0_0 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:02.893 Found net devices under 0000:86:00.1: cvl_0_1 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:02.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:30:02.893 00:30:02.893 --- 10.0.0.2 ping statistics --- 00:30:02.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.893 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:30:02.893 00:30:02.893 --- 10.0.0.1 ping statistics --- 00:30:02.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.893 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:02.893 14:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:05.420 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:05.420 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:30:05.420 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:05.987 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2735990 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2735990 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2735990 ']' 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
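The `nvmf_tcp_init` steps traced above move one port of the two-port NIC into a private network namespace so that target and initiator traffic actually crosses the wire between `cvl_0_0` and `cvl_0_1`. A minimal sketch of that topology setup follows, using the interface names, addresses, and namespace name from the log; the `run` helper is illustrative and only prints each command, so the sketch can be inspected without root (replace `run` with direct execution, as root, to apply it).

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init in the log above.
# 'run' prints each command instead of executing it; swap it for real
# execution (as root) to actually build the topology.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, addressed 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, addressed 10.0.0.1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side, then verify reachability
# in both directions, as the log does with ping.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target interface lives in `$NS`, the `nvmf_tgt` process is later launched under `ip netns exec "$NS"` (visible in the log as `NVMF_TARGET_NS_CMD`), while the initiator-side tools run in the default namespace.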
00:30:05.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.987 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:06.246 [2024-07-12 14:34:58.013987] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:30:06.246 [2024-07-12 14:34:58.014034] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.246 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.246 [2024-07-12 14:34:58.075486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.246 [2024-07-12 14:34:58.157215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.246 [2024-07-12 14:34:58.157252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.246 [2024-07-12 14:34:58.157259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.246 [2024-07-12 14:34:58.157265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.246 [2024-07-12 14:34:58.157270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.246 [2024-07-12 14:34:58.157312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.246 [2024-07-12 14:34:58.157404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.246 [2024-07-12 14:34:58.157448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.246 [2024-07-12 14:34:58.157449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.813 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.071 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:07.071 ************************************ 00:30:07.071 START TEST spdk_target_abort 00:30:07.071 ************************************ 00:30:07.071 14:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:07.071 14:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:07.071 14:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:07.071 14:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.071 14:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.357 spdk_targetn1 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.357 [2024-07-12 14:35:01.732056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.357 [2024-07-12 14:35:01.764974] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:10.357 14:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:10.357 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.642 Initializing NVMe Controllers 00:30:13.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:13.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:13.642 Initialization complete. Launching workers. 
00:30:13.642 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16542, failed: 0 00:30:13.642 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1316, failed to submit 15226 00:30:13.642 success 782, unsuccess 534, failed 0 00:30:13.642 14:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:13.642 14:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:13.642 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.941 Initializing NVMe Controllers 00:30:16.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:16.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:16.941 Initialization complete. Launching workers. 
00:30:16.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8741, failed: 0 00:30:16.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7505 00:30:16.941 success 340, unsuccess 896, failed 0 00:30:16.941 14:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:16.941 14:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:16.941 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.473 Initializing NVMe Controllers 00:30:19.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:19.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:19.473 Initialization complete. Launching workers. 
00:30:19.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38293, failed: 0 00:30:19.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2830, failed to submit 35463 00:30:19.473 success 586, unsuccess 2244, failed 0 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.473 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2735990 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2735990 ']' 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2735990 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2735990 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:20.850 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2735990' 00:30:20.850 killing process with pid 2735990 00:30:20.851 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2735990 00:30:20.851 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2735990 00:30:21.109 00:30:21.109 real 0m13.968s 00:30:21.109 user 0m55.705s 00:30:21.109 sys 0m2.239s 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.109 ************************************ 00:30:21.109 END TEST spdk_target_abort 00:30:21.109 ************************************ 00:30:21.109 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:21.109 14:35:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:21.109 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:21.109 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.109 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:21.109 ************************************ 00:30:21.109 START TEST kernel_target_abort 00:30:21.109 ************************************ 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:21.109 14:35:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:21.109 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:23.644 Waiting for block devices as requested 00:30:23.644 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:23.644 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:23.644 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:23.902 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:23.902 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:23.902 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:23.902 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:24.161 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:24.161 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:24.161 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:24.421 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:24.421 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:24.421 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:24.421 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:24.680 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:24.680 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:24.680 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:24.939 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:24.939 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:24.939 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:30:24.939 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:24.939 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:24.940 No valid GPT data, bailing 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:24.940 14:35:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:24.940 00:30:24.940 Discovery Log Number of Records 2, Generation counter 2 00:30:24.940 =====Discovery Log Entry 0====== 00:30:24.940 trtype: tcp 00:30:24.940 adrfam: ipv4 00:30:24.940 subtype: current discovery subsystem 00:30:24.940 treq: not specified, sq flow control disable supported 00:30:24.940 portid: 1 00:30:24.940 trsvcid: 4420 00:30:24.940 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:24.940 traddr: 10.0.0.1 00:30:24.940 eflags: none 00:30:24.940 sectype: none 00:30:24.940 =====Discovery Log Entry 1====== 00:30:24.940 trtype: tcp 00:30:24.940 adrfam: ipv4 00:30:24.940 subtype: nvme subsystem 00:30:24.940 treq: not specified, sq flow control disable supported 00:30:24.940 portid: 1 00:30:24.940 trsvcid: 4420 00:30:24.940 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:24.940 traddr: 10.0.0.1 00:30:24.940 eflags: none 00:30:24.940 
sectype: none 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:24.940 14:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:24.940 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.230 Initializing NVMe Controllers 00:30:28.230 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:28.230 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:28.230 Initialization complete. Launching workers. 
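The `kernel_target_abort` test traced above configures a Linux kernel NVMe-oF target purely through the nvmet configfs tree (`mkdir` plus `echo` into attribute files, then a symlink to bind the subsystem to a port). A minimal sketch of that sequence follows, mirroring the paths and values in the log; the `do_mkdir`/`do_write` helpers are illustrative and only print their effect, and the specific attribute file names (`attr_allow_any_host`, `device_path`, etc.) are assumptions based on the standard nvmet configfs layout, since the log shows only the `echo` commands without their redirection targets.

```shell
#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF target setup driven via configfs in the log
# above. Helpers print the intended action instead of touching /sys, so this
# runs without root or the nvmet module loaded.
set -euo pipefail

SUBNQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$SUBNQN
PORT=$NVMET/ports/1

do_mkdir() { echo "mkdir $1"; }
do_write() { echo "echo $1 > $2"; }

do_mkdir "$SUBSYS"                                   # create the subsystem
do_mkdir "$SUBSYS/namespaces/1"                      # one namespace
do_mkdir "$PORT"                                     # one listening port
do_write 1 "$SUBSYS/attr_allow_any_host"             # accept any host NQN
do_write /dev/nvme0n1 "$SUBSYS/namespaces/1/device_path"
do_write 1 "$SUBSYS/namespaces/1/enable"
do_write 10.0.0.1 "$PORT/addr_traddr"                # listen address
do_write tcp "$PORT/addr_trtype"
do_write 4420 "$PORT/addr_trsvcid"
do_write ipv4 "$PORT/addr_adrfam"
echo "ln -s $SUBSYS $PORT/subsystems/"               # expose subsystem on port
```

After the symlink, `nvme discover -a 10.0.0.1 -t tcp -s 4420` reports two discovery log entries, exactly as in the log above: the discovery subsystem itself and `nqn.2016-06.io.spdk:testnqn`.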
00:30:28.230 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88806, failed: 0 00:30:28.230 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 88806, failed to submit 0 00:30:28.230 success 0, unsuccess 88806, failed 0 00:30:28.230 14:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:28.230 14:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:28.230 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.612 Initializing NVMe Controllers 00:30:31.612 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:31.612 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:31.612 Initialization complete. Launching workers. 
00:30:31.612 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143228, failed: 0 00:30:31.612 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35326, failed to submit 107902 00:30:31.612 success 0, unsuccess 35326, failed 0 00:30:31.612 14:35:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:31.612 14:35:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:31.612 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.898 Initializing NVMe Controllers 00:30:34.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:34.898 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:34.898 Initialization complete. Launching workers. 
00:30:34.898 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137689, failed: 0 00:30:34.898 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34494, failed to submit 103195 00:30:34.898 success 0, unsuccess 34494, failed 0 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:34.898 14:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:36.803 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:36.803 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:37.740 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:37.740 00:30:37.740 real 0m16.717s 00:30:37.740 user 0m8.551s 00:30:37.740 sys 0m4.596s 00:30:37.740 14:35:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.740 14:35:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.740 ************************************ 00:30:37.740 END TEST kernel_target_abort 00:30:37.740 ************************************ 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.740 rmmod nvme_tcp 00:30:37.740 rmmod nvme_fabrics 
00:30:37.740 rmmod nvme_keyring 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.740 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2735990 ']' 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2735990 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2735990 ']' 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2735990 00:30:37.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2735990) - No such process 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2735990 is not found' 00:30:37.998 Process with pid 2735990 is not found 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:37.998 14:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:39.896 Waiting for block devices as requested 00:30:39.896 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:39.896 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:40.155 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:40.155 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:40.155 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:40.155 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:40.414 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:40.414 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:40.414 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:40.414 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:40.671 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:40.671 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:40.671 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:30:40.929 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:40.929 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:40.929 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:40.929 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:41.188 14:35:32 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.188 14:35:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.188 14:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.188 14:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.188 14:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.188 14:35:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:41.188 14:35:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.094 14:35:35 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.094 00:30:43.094 real 0m45.993s 00:30:43.094 user 1m7.789s 00:30:43.094 sys 0m14.203s 00:30:43.094 14:35:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.094 14:35:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:43.094 ************************************ 00:30:43.094 END TEST nvmf_abort_qd_sizes 00:30:43.094 ************************************ 00:30:43.094 14:35:35 -- common/autotest_common.sh@1142 -- # return 0 00:30:43.094 14:35:35 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:43.094 14:35:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:43.094 14:35:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.094 14:35:35 -- common/autotest_common.sh@10 -- # set +x 00:30:43.353 ************************************ 00:30:43.353 START TEST keyring_file 00:30:43.353 
************************************ 00:30:43.353 14:35:35 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:43.353 * Looking for test storage... 00:30:43.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:43.353 14:35:35 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.353 
14:35:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.353 14:35:35 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.353 14:35:35 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.353 14:35:35 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.353 14:35:35 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.353 14:35:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.353 14:35:35 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.353 14:35:35 
keyring_file -- paths/export.sh@5 -- # export PATH 00:30:43.353 14:35:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.353 14:35:35 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.353 14:35:35 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:43.353 14:35:35 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:43.353 14:35:35 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DBkmUGokzd 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DBkmUGokzd 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DBkmUGokzd 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DBkmUGokzd 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j7GI7kjeh7 00:30:43.354 14:35:35 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:43.354 14:35:35 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j7GI7kjeh7 00:30:43.354 14:35:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j7GI7kjeh7 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.j7GI7kjeh7 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@30 -- # tgtpid=2744587 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:43.354 14:35:35 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2744587 00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2744587 ']' 00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.354 14:35:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:43.612 [2024-07-12 14:35:35.383644] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:30:43.612 [2024-07-12 14:35:35.383692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744587 ] 00:30:43.612 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.612 [2024-07-12 14:35:35.437590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.612 [2024-07-12 14:35:35.516845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.181 14:35:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.181 14:35:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:44.181 14:35:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:44.181 14:35:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.181 14:35:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.181 [2024-07-12 14:35:36.186626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.441 null0 00:30:44.441 [2024-07-12 14:35:36.218678] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:44.441 [2024-07-12 14:35:36.218898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.441 [2024-07-12 14:35:36.226701] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.441 14:35:36 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.441 [2024-07-12 14:35:36.242751] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:44.441 request: 00:30:44.441 { 00:30:44.441 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:44.441 "secure_channel": false, 00:30:44.441 "listen_address": { 00:30:44.441 "trtype": "tcp", 00:30:44.441 "traddr": "127.0.0.1", 00:30:44.441 "trsvcid": "4420" 00:30:44.441 }, 00:30:44.441 "method": "nvmf_subsystem_add_listener", 00:30:44.441 "req_id": 1 00:30:44.441 } 00:30:44.441 Got JSON-RPC error response 00:30:44.441 response: 00:30:44.441 { 00:30:44.441 "code": -32602, 00:30:44.441 "message": "Invalid parameters" 00:30:44.441 } 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:44.441 14:35:36 keyring_file -- keyring/file.sh@46 -- # bperfpid=2744759 00:30:44.441 14:35:36 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2744759 /var/tmp/bperf.sock 00:30:44.441 14:35:36 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2744759 ']' 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:44.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:44.441 14:35:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:44.441 [2024-07-12 14:35:36.293971] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:30:44.441 [2024-07-12 14:35:36.294014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744759 ] 00:30:44.441 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.441 [2024-07-12 14:35:36.348085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.441 [2024-07-12 14:35:36.423055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.377 14:35:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.377 14:35:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:45.377 14:35:37 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:45.377 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:45.377 14:35:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.j7GI7kjeh7 00:30:45.377 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.j7GI7kjeh7 00:30:45.635 14:35:37 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.635 14:35:37 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.635 14:35:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DBkmUGokzd == 
\/\t\m\p\/\t\m\p\.\D\B\k\m\U\G\o\k\z\d ]] 00:30:45.635 14:35:37 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:45.635 14:35:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.635 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.895 14:35:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.j7GI7kjeh7 == \/\t\m\p\/\t\m\p\.\j\7\G\I\7\k\j\e\h\7 ]] 00:30:45.895 14:35:37 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:45.895 14:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:45.895 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.895 14:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.895 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.895 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.155 14:35:37 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:46.155 14:35:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:46.155 14:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:46.155 14:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.155 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.155 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:46.155 14:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.155 14:35:38 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:30:46.155 14:35:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:46.155 14:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:46.413 [2024-07-12 14:35:38.297400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:46.413 nvme0n1 00:30:46.413 14:35:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:46.413 14:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:46.413 14:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.413 14:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.413 14:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:46.413 14:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.672 14:35:38 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:46.672 14:35:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:46.672 14:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:46.672 14:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:46.672 14:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:46.672 14:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:46.672 14:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.931 14:35:38 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:46.931 14:35:38 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.931 Running I/O for 1 seconds... 00:30:47.867 00:30:47.867 Latency(us) 00:30:47.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.867 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:47.867 nvme0n1 : 1.00 17093.49 66.77 0.00 0.00 7468.19 3789.69 11910.46 00:30:47.867 =================================================================================================================== 00:30:47.867 Total : 17093.49 66.77 0.00 0.00 7468.19 3789.69 11910.46 00:30:47.867 0 00:30:47.867 14:35:39 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:47.867 14:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:48.126 14:35:40 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:48.126 14:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.126 14:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.126 14:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.126 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.126 14:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.385 14:35:40 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:48.385 14:35:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:48.385 14:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.385 14:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.385 14:35:40 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.385 14:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.385 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.385 14:35:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:48.385 14:35:40 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:48.385 14:35:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.385 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:48.644 [2024-07-12 14:35:40.542157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:48.644 [2024-07-12 14:35:40.543063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1770 (107): Transport endpoint is not connected 00:30:48.644 [2024-07-12 14:35:40.544058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1770 (9): Bad file descriptor 00:30:48.644 [2024-07-12 14:35:40.545058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:48.644 [2024-07-12 14:35:40.545067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:48.644 [2024-07-12 14:35:40.545075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:48.644 request: 00:30:48.644 { 00:30:48.644 "name": "nvme0", 00:30:48.644 "trtype": "tcp", 00:30:48.644 "traddr": "127.0.0.1", 00:30:48.644 "adrfam": "ipv4", 00:30:48.644 "trsvcid": "4420", 00:30:48.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.644 "prchk_reftag": false, 00:30:48.644 "prchk_guard": false, 00:30:48.644 "hdgst": false, 00:30:48.644 "ddgst": false, 00:30:48.644 "psk": "key1", 00:30:48.644 "method": "bdev_nvme_attach_controller", 00:30:48.644 "req_id": 1 00:30:48.644 } 00:30:48.644 Got JSON-RPC error response 00:30:48.644 response: 00:30:48.644 { 00:30:48.644 "code": -5, 00:30:48.644 "message": "Input/output error" 00:30:48.644 } 00:30:48.644 14:35:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:48.644 14:35:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:48.644 14:35:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:48.644 14:35:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:48.644 14:35:40 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:48.644 
14:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.644 14:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.644 14:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.644 14:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.644 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.903 14:35:40 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:48.903 14:35:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:48.903 14:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.903 14:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.903 14:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.903 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.903 14:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.162 14:35:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:49.162 14:35:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:49.162 14:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:49.162 14:35:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:49.162 14:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:49.420 14:35:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:49.420 14:35:41 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:49.420 14:35:41 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.679 14:35:41 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:49.679 14:35:41 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.679 [2024-07-12 14:35:41.605176] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DBkmUGokzd': 0100660 00:30:49.679 [2024-07-12 14:35:41.605201] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:49.679 request: 00:30:49.679 { 00:30:49.679 "name": "key0", 00:30:49.679 "path": "/tmp/tmp.DBkmUGokzd", 00:30:49.679 "method": "keyring_file_add_key", 00:30:49.679 "req_id": 1 00:30:49.679 } 00:30:49.679 Got JSON-RPC error response 00:30:49.679 response: 00:30:49.679 { 00:30:49.679 "code": -1, 00:30:49.679 "message": "Operation not permitted" 
00:30:49.679 } 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:49.679 14:35:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:49.679 14:35:41 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.679 14:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DBkmUGokzd 00:30:49.937 14:35:41 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DBkmUGokzd 00:30:49.937 14:35:41 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:49.937 14:35:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:49.937 14:35:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.937 14:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.937 14:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.937 14:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.196 14:35:41 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:50.196 14:35:41 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.196 14:35:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:50.196 14:35:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.197 14:35:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:50.197 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.197 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:50.197 14:35:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.197 14:35:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.197 14:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.197 [2024-07-12 14:35:42.118541] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DBkmUGokzd': No such file or directory 00:30:50.197 [2024-07-12 14:35:42.118559] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:50.197 [2024-07-12 14:35:42.118579] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:50.197 [2024-07-12 14:35:42.118585] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:50.197 [2024-07-12 14:35:42.118590] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:50.197 request: 00:30:50.197 { 00:30:50.197 "name": "nvme0", 00:30:50.197 "trtype": "tcp", 00:30:50.197 "traddr": "127.0.0.1", 00:30:50.197 "adrfam": "ipv4", 00:30:50.197 "trsvcid": "4420", 00:30:50.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.197 
"prchk_reftag": false, 00:30:50.197 "prchk_guard": false, 00:30:50.197 "hdgst": false, 00:30:50.197 "ddgst": false, 00:30:50.197 "psk": "key0", 00:30:50.197 "method": "bdev_nvme_attach_controller", 00:30:50.197 "req_id": 1 00:30:50.197 } 00:30:50.197 Got JSON-RPC error response 00:30:50.197 response: 00:30:50.197 { 00:30:50.197 "code": -19, 00:30:50.197 "message": "No such device" 00:30:50.197 } 00:30:50.197 14:35:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:50.197 14:35:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:50.197 14:35:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:50.197 14:35:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:50.197 14:35:42 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:50.197 14:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:50.456 14:35:42 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.srq5jaZ3xB 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:50.456 14:35:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:50.456 14:35:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.456 14:35:42 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.456 14:35:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:50.456 14:35:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:50.456 14:35:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.srq5jaZ3xB 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.srq5jaZ3xB 00:30:50.456 14:35:42 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.srq5jaZ3xB 00:30:50.456 14:35:42 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.srq5jaZ3xB 00:30:50.456 14:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.srq5jaZ3xB 00:30:50.715 14:35:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.715 14:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.973 nvme0n1 00:30:50.973 14:35:42 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:30:50.973 14:35:42 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:50.973 14:35:42 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:50.973 14:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:51.230 14:35:43 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:51.230 14:35:43 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:51.230 14:35:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.230 14:35:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.230 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.488 14:35:43 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:51.488 14:35:43 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.488 14:35:43 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:51.488 14:35:43 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:51.488 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:51.746 14:35:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:30:51.746 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.746 14:35:43 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:52.005 14:35:43 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:52.005 14:35:43 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.srq5jaZ3xB 00:30:52.005 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.srq5jaZ3xB 00:30:52.005 14:35:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.j7GI7kjeh7 00:30:52.005 14:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.j7GI7kjeh7 00:30:52.292 14:35:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.292 14:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.606 nvme0n1 00:30:52.606 14:35:44 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:52.606 14:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:52.606 14:35:44 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:52.606 "subsystems": [ 00:30:52.606 { 00:30:52.606 "subsystem": "keyring", 00:30:52.606 "config": [ 00:30:52.606 { 00:30:52.606 "method": "keyring_file_add_key", 00:30:52.606 
"params": { 00:30:52.606 "name": "key0", 00:30:52.606 "path": "/tmp/tmp.srq5jaZ3xB" 00:30:52.606 } 00:30:52.606 }, 00:30:52.606 { 00:30:52.606 "method": "keyring_file_add_key", 00:30:52.606 "params": { 00:30:52.606 "name": "key1", 00:30:52.606 "path": "/tmp/tmp.j7GI7kjeh7" 00:30:52.606 } 00:30:52.606 } 00:30:52.606 ] 00:30:52.606 }, 00:30:52.606 { 00:30:52.606 "subsystem": "iobuf", 00:30:52.606 "config": [ 00:30:52.606 { 00:30:52.606 "method": "iobuf_set_options", 00:30:52.606 "params": { 00:30:52.606 "small_pool_count": 8192, 00:30:52.606 "large_pool_count": 1024, 00:30:52.606 "small_bufsize": 8192, 00:30:52.606 "large_bufsize": 135168 00:30:52.606 } 00:30:52.606 } 00:30:52.606 ] 00:30:52.606 }, 00:30:52.606 { 00:30:52.606 "subsystem": "sock", 00:30:52.606 "config": [ 00:30:52.606 { 00:30:52.606 "method": "sock_set_default_impl", 00:30:52.607 "params": { 00:30:52.607 "impl_name": "posix" 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "sock_impl_set_options", 00:30:52.607 "params": { 00:30:52.607 "impl_name": "ssl", 00:30:52.607 "recv_buf_size": 4096, 00:30:52.607 "send_buf_size": 4096, 00:30:52.607 "enable_recv_pipe": true, 00:30:52.607 "enable_quickack": false, 00:30:52.607 "enable_placement_id": 0, 00:30:52.607 "enable_zerocopy_send_server": true, 00:30:52.607 "enable_zerocopy_send_client": false, 00:30:52.607 "zerocopy_threshold": 0, 00:30:52.607 "tls_version": 0, 00:30:52.607 "enable_ktls": false 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "sock_impl_set_options", 00:30:52.607 "params": { 00:30:52.607 "impl_name": "posix", 00:30:52.607 "recv_buf_size": 2097152, 00:30:52.607 "send_buf_size": 2097152, 00:30:52.607 "enable_recv_pipe": true, 00:30:52.607 "enable_quickack": false, 00:30:52.607 "enable_placement_id": 0, 00:30:52.607 "enable_zerocopy_send_server": true, 00:30:52.607 "enable_zerocopy_send_client": false, 00:30:52.607 "zerocopy_threshold": 0, 00:30:52.607 "tls_version": 0, 00:30:52.607 "enable_ktls": false 
00:30:52.607 } 00:30:52.607 } 00:30:52.607 ] 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "subsystem": "vmd", 00:30:52.607 "config": [] 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "subsystem": "accel", 00:30:52.607 "config": [ 00:30:52.607 { 00:30:52.607 "method": "accel_set_options", 00:30:52.607 "params": { 00:30:52.607 "small_cache_size": 128, 00:30:52.607 "large_cache_size": 16, 00:30:52.607 "task_count": 2048, 00:30:52.607 "sequence_count": 2048, 00:30:52.607 "buf_count": 2048 00:30:52.607 } 00:30:52.607 } 00:30:52.607 ] 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "subsystem": "bdev", 00:30:52.607 "config": [ 00:30:52.607 { 00:30:52.607 "method": "bdev_set_options", 00:30:52.607 "params": { 00:30:52.607 "bdev_io_pool_size": 65535, 00:30:52.607 "bdev_io_cache_size": 256, 00:30:52.607 "bdev_auto_examine": true, 00:30:52.607 "iobuf_small_cache_size": 128, 00:30:52.607 "iobuf_large_cache_size": 16 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_raid_set_options", 00:30:52.607 "params": { 00:30:52.607 "process_window_size_kb": 1024 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_iscsi_set_options", 00:30:52.607 "params": { 00:30:52.607 "timeout_sec": 30 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_nvme_set_options", 00:30:52.607 "params": { 00:30:52.607 "action_on_timeout": "none", 00:30:52.607 "timeout_us": 0, 00:30:52.607 "timeout_admin_us": 0, 00:30:52.607 "keep_alive_timeout_ms": 10000, 00:30:52.607 "arbitration_burst": 0, 00:30:52.607 "low_priority_weight": 0, 00:30:52.607 "medium_priority_weight": 0, 00:30:52.607 "high_priority_weight": 0, 00:30:52.607 "nvme_adminq_poll_period_us": 10000, 00:30:52.607 "nvme_ioq_poll_period_us": 0, 00:30:52.607 "io_queue_requests": 512, 00:30:52.607 "delay_cmd_submit": true, 00:30:52.607 "transport_retry_count": 4, 00:30:52.607 "bdev_retry_count": 3, 00:30:52.607 "transport_ack_timeout": 0, 00:30:52.607 "ctrlr_loss_timeout_sec": 0, 00:30:52.607 
"reconnect_delay_sec": 0, 00:30:52.607 "fast_io_fail_timeout_sec": 0, 00:30:52.607 "disable_auto_failback": false, 00:30:52.607 "generate_uuids": false, 00:30:52.607 "transport_tos": 0, 00:30:52.607 "nvme_error_stat": false, 00:30:52.607 "rdma_srq_size": 0, 00:30:52.607 "io_path_stat": false, 00:30:52.607 "allow_accel_sequence": false, 00:30:52.607 "rdma_max_cq_size": 0, 00:30:52.607 "rdma_cm_event_timeout_ms": 0, 00:30:52.607 "dhchap_digests": [ 00:30:52.607 "sha256", 00:30:52.607 "sha384", 00:30:52.607 "sha512" 00:30:52.607 ], 00:30:52.607 "dhchap_dhgroups": [ 00:30:52.607 "null", 00:30:52.607 "ffdhe2048", 00:30:52.607 "ffdhe3072", 00:30:52.607 "ffdhe4096", 00:30:52.607 "ffdhe6144", 00:30:52.607 "ffdhe8192" 00:30:52.607 ] 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_nvme_attach_controller", 00:30:52.607 "params": { 00:30:52.607 "name": "nvme0", 00:30:52.607 "trtype": "TCP", 00:30:52.607 "adrfam": "IPv4", 00:30:52.607 "traddr": "127.0.0.1", 00:30:52.607 "trsvcid": "4420", 00:30:52.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.607 "prchk_reftag": false, 00:30:52.607 "prchk_guard": false, 00:30:52.607 "ctrlr_loss_timeout_sec": 0, 00:30:52.607 "reconnect_delay_sec": 0, 00:30:52.607 "fast_io_fail_timeout_sec": 0, 00:30:52.607 "psk": "key0", 00:30:52.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.607 "hdgst": false, 00:30:52.607 "ddgst": false 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_nvme_set_hotplug", 00:30:52.607 "params": { 00:30:52.607 "period_us": 100000, 00:30:52.607 "enable": false 00:30:52.607 } 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "method": "bdev_wait_for_examine" 00:30:52.607 } 00:30:52.607 ] 00:30:52.607 }, 00:30:52.607 { 00:30:52.607 "subsystem": "nbd", 00:30:52.607 "config": [] 00:30:52.607 } 00:30:52.607 ] 00:30:52.607 }' 00:30:52.607 14:35:44 keyring_file -- keyring/file.sh@114 -- # killprocess 2744759 00:30:52.607 14:35:44 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 2744759 ']' 00:30:52.607 14:35:44 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2744759 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2744759 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2744759' 00:30:52.868 killing process with pid 2744759 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@967 -- # kill 2744759 00:30:52.868 Received shutdown signal, test time was about 1.000000 seconds 00:30:52.868 00:30:52.868 Latency(us) 00:30:52.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.868 =================================================================================================================== 00:30:52.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@972 -- # wait 2744759 00:30:52.868 14:35:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=2746283 00:30:52.868 14:35:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2746283 /var/tmp/bperf.sock 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2746283 ']' 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.868 14:35:44 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 
00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.868 14:35:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:52.868 "subsystems": [ 00:30:52.868 { 00:30:52.868 "subsystem": "keyring", 00:30:52.868 "config": [ 00:30:52.868 { 00:30:52.868 "method": "keyring_file_add_key", 00:30:52.868 "params": { 00:30:52.868 "name": "key0", 00:30:52.868 "path": "/tmp/tmp.srq5jaZ3xB" 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "keyring_file_add_key", 00:30:52.868 "params": { 00:30:52.868 "name": "key1", 00:30:52.868 "path": "/tmp/tmp.j7GI7kjeh7" 00:30:52.868 } 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "iobuf", 00:30:52.868 "config": [ 00:30:52.868 { 00:30:52.868 "method": "iobuf_set_options", 00:30:52.868 "params": { 00:30:52.868 "small_pool_count": 8192, 00:30:52.868 "large_pool_count": 1024, 00:30:52.868 "small_bufsize": 8192, 00:30:52.868 "large_bufsize": 135168 00:30:52.868 } 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "sock", 00:30:52.868 "config": [ 00:30:52.868 { 00:30:52.868 "method": "sock_set_default_impl", 00:30:52.868 "params": { 00:30:52.868 "impl_name": "posix" 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "sock_impl_set_options", 00:30:52.868 "params": { 00:30:52.868 "impl_name": "ssl", 00:30:52.868 "recv_buf_size": 4096, 00:30:52.868 "send_buf_size": 4096, 00:30:52.868 "enable_recv_pipe": true, 00:30:52.868 "enable_quickack": false, 00:30:52.868 "enable_placement_id": 0, 00:30:52.868 "enable_zerocopy_send_server": true, 00:30:52.868 "enable_zerocopy_send_client": false, 00:30:52.868 "zerocopy_threshold": 0, 00:30:52.868 "tls_version": 0, 00:30:52.868 "enable_ktls": false 00:30:52.868 } 00:30:52.868 }, 
00:30:52.868 { 00:30:52.868 "method": "sock_impl_set_options", 00:30:52.868 "params": { 00:30:52.868 "impl_name": "posix", 00:30:52.868 "recv_buf_size": 2097152, 00:30:52.868 "send_buf_size": 2097152, 00:30:52.868 "enable_recv_pipe": true, 00:30:52.868 "enable_quickack": false, 00:30:52.868 "enable_placement_id": 0, 00:30:52.868 "enable_zerocopy_send_server": true, 00:30:52.868 "enable_zerocopy_send_client": false, 00:30:52.868 "zerocopy_threshold": 0, 00:30:52.868 "tls_version": 0, 00:30:52.868 "enable_ktls": false 00:30:52.868 } 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "vmd", 00:30:52.868 "config": [] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "accel", 00:30:52.868 "config": [ 00:30:52.868 { 00:30:52.868 "method": "accel_set_options", 00:30:52.868 "params": { 00:30:52.868 "small_cache_size": 128, 00:30:52.868 "large_cache_size": 16, 00:30:52.868 "task_count": 2048, 00:30:52.868 "sequence_count": 2048, 00:30:52.868 "buf_count": 2048 00:30:52.868 } 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "bdev", 00:30:52.868 "config": [ 00:30:52.868 { 00:30:52.868 "method": "bdev_set_options", 00:30:52.868 "params": { 00:30:52.868 "bdev_io_pool_size": 65535, 00:30:52.868 "bdev_io_cache_size": 256, 00:30:52.868 "bdev_auto_examine": true, 00:30:52.868 "iobuf_small_cache_size": 128, 00:30:52.868 "iobuf_large_cache_size": 16 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "bdev_raid_set_options", 00:30:52.868 "params": { 00:30:52.868 "process_window_size_kb": 1024 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "bdev_iscsi_set_options", 00:30:52.868 "params": { 00:30:52.868 "timeout_sec": 30 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "bdev_nvme_set_options", 00:30:52.868 "params": { 00:30:52.868 "action_on_timeout": "none", 00:30:52.868 "timeout_us": 0, 00:30:52.868 "timeout_admin_us": 0, 00:30:52.868 
"keep_alive_timeout_ms": 10000, 00:30:52.868 "arbitration_burst": 0, 00:30:52.868 "low_priority_weight": 0, 00:30:52.868 "medium_priority_weight": 0, 00:30:52.868 "high_priority_weight": 0, 00:30:52.868 "nvme_adminq_poll_period_us": 10000, 00:30:52.868 "nvme_ioq_poll_period_us": 0, 00:30:52.868 "io_queue_requests": 512, 00:30:52.868 "delay_cmd_submit": true, 00:30:52.868 "transport_retry_count": 4, 00:30:52.868 "bdev_retry_count": 3, 00:30:52.868 "transport_ack_timeout": 0, 00:30:52.868 "ctrlr_loss_timeout_sec": 0, 00:30:52.868 "reconnect_delay_sec": 0, 00:30:52.868 "fast_io_fail_timeout_sec": 0, 00:30:52.868 "disable_auto_failback": false, 00:30:52.868 "generate_uuids": false, 00:30:52.868 "transport_tos": 0, 00:30:52.868 "nvme_error_stat": false, 00:30:52.868 "rdma_srq_size": 0, 00:30:52.868 "io_path_stat": false, 00:30:52.868 "allow_accel_sequence": false, 00:30:52.868 "rdma_max_cq_size": 0, 00:30:52.868 "rdma_cm_event_timeout_ms": 0, 00:30:52.868 "dhchap_digests": [ 00:30:52.868 "sha256", 00:30:52.868 "sha384", 00:30:52.868 "sha512" 00:30:52.868 ], 00:30:52.868 "dhchap_dhgroups": [ 00:30:52.868 "null", 00:30:52.868 "ffdhe2048", 00:30:52.868 "ffdhe3072", 00:30:52.868 "ffdhe4096", 00:30:52.868 "ffdhe6144", 00:30:52.868 "ffdhe8192" 00:30:52.868 ] 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "bdev_nvme_attach_controller", 00:30:52.868 "params": { 00:30:52.868 "name": "nvme0", 00:30:52.868 "trtype": "TCP", 00:30:52.868 "adrfam": "IPv4", 00:30:52.868 "traddr": "127.0.0.1", 00:30:52.868 "trsvcid": "4420", 00:30:52.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.868 "prchk_reftag": false, 00:30:52.868 "prchk_guard": false, 00:30:52.868 "ctrlr_loss_timeout_sec": 0, 00:30:52.868 "reconnect_delay_sec": 0, 00:30:52.868 "fast_io_fail_timeout_sec": 0, 00:30:52.868 "psk": "key0", 00:30:52.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.868 "hdgst": false, 00:30:52.868 "ddgst": false 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 
"method": "bdev_nvme_set_hotplug", 00:30:52.868 "params": { 00:30:52.868 "period_us": 100000, 00:30:52.868 "enable": false 00:30:52.868 } 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "method": "bdev_wait_for_examine" 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }, 00:30:52.868 { 00:30:52.868 "subsystem": "nbd", 00:30:52.868 "config": [] 00:30:52.868 } 00:30:52.868 ] 00:30:52.868 }' 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.868 14:35:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.128 [2024-07-12 14:35:44.887605] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 00:30:53.128 [2024-07-12 14:35:44.887654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746283 ] 00:30:53.128 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.128 [2024-07-12 14:35:44.942272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.128 [2024-07-12 14:35:45.013048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.386 [2024-07-12 14:35:45.172638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:53.953 14:35:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:53.953 14:35:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:53.953 14:35:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:53.953 14:35:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.953 14:35:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:53.953 14:35:45 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.953 14:35:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.212 14:35:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:54.212 14:35:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:54.212 14:35:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:54.212 14:35:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:54.212 14:35:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.212 14:35:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.212 14:35:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:54.470 14:35:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.srq5jaZ3xB /tmp/tmp.j7GI7kjeh7 00:30:54.470 14:35:46 keyring_file -- keyring/file.sh@20 -- # killprocess 2746283 
00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2746283 ']' 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2746283 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2746283 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2746283' 00:30:54.470 killing process with pid 2746283 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@967 -- # kill 2746283 00:30:54.470 Received shutdown signal, test time was about 1.000000 seconds 00:30:54.470 00:30:54.470 Latency(us) 00:30:54.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.470 =================================================================================================================== 00:30:54.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:54.470 14:35:46 keyring_file -- common/autotest_common.sh@972 -- # wait 2746283 00:30:54.729 14:35:46 keyring_file -- keyring/file.sh@21 -- # killprocess 2744587 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2744587 ']' 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2744587 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2744587 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2744587' 00:30:54.729 killing process with pid 2744587 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@967 -- # kill 2744587 00:30:54.729 [2024-07-12 14:35:46.668576] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:54.729 14:35:46 keyring_file -- common/autotest_common.sh@972 -- # wait 2744587 00:30:54.988 00:30:54.988 real 0m11.845s 00:30:54.988 user 0m28.374s 00:30:54.988 sys 0m2.586s 00:30:54.988 14:35:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:54.988 14:35:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:54.988 ************************************ 00:30:54.988 END TEST keyring_file 00:30:54.988 ************************************ 00:30:55.248 14:35:47 -- common/autotest_common.sh@1142 -- # return 0 00:30:55.248 14:35:47 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:55.248 14:35:47 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:55.248 14:35:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:55.248 14:35:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.248 14:35:47 -- common/autotest_common.sh@10 -- # set +x 00:30:55.248 ************************************ 00:30:55.248 START TEST keyring_linux 00:30:55.248 ************************************ 00:30:55.248 14:35:47 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:55.248 * Looking for test storage... 
00:30:55.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.248 14:35:47 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.248 14:35:47 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.248 14:35:47 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.248 14:35:47 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.248 14:35:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 14:35:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 14:35:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 14:35:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:55.248 14:35:47 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.248 14:35:47 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:55.248 14:35:47 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:55.248 /tmp/:spdk-test:key0 00:30:55.248 14:35:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:55.248 14:35:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:55.249 14:35:47 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:55.249 14:35:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:55.249 14:35:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:55.249 /tmp/:spdk-test:key1 00:30:55.249 14:35:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2746827 00:30:55.249 14:35:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2746827 00:30:55.249 14:35:47 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2746827 ']' 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.249 14:35:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:55.508 [2024-07-12 14:35:47.288841] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:30:55.508 [2024-07-12 14:35:47.288886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746827 ] 00:30:55.508 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.508 [2024-07-12 14:35:47.340182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.508 [2024-07-12 14:35:47.411786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.076 14:35:48 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:56.076 14:35:48 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:56.076 14:35:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:56.076 14:35:48 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.076 14:35:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.076 [2024-07-12 14:35:48.082027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.335 null0 00:30:56.335 [2024-07-12 14:35:48.114073] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:56.335 [2024-07-12 14:35:48.114388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.335 14:35:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:56.335 84473532 00:30:56.335 14:35:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:56.335 659113606 00:30:56.335 14:35:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2746844 00:30:56.335 14:35:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2746844 
/var/tmp/bperf.sock 00:30:56.335 14:35:48 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2746844 ']' 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:56.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:56.335 14:35:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:56.335 [2024-07-12 14:35:48.183375] Starting SPDK v24.09-pre git sha1 192cfc373 / DPDK 24.03.0 initialization... 
00:30:56.335 [2024-07-12 14:35:48.183423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746844 ] 00:30:56.335 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.335 [2024-07-12 14:35:48.238533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.335 [2024-07-12 14:35:48.318164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.271 14:35:48 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:57.271 14:35:48 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:57.271 14:35:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:57.271 14:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:57.271 14:35:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:57.271 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:57.530 14:35:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.530 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:57.788 [2024-07-12 14:35:49.554685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:57.788 
nvme0n1 00:30:57.788 14:35:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:57.788 14:35:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:57.788 14:35:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:57.789 14:35:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:57.789 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.789 14:35:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:58.047 14:35:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.047 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.047 14:35:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@25 -- # sn=84473532 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 84473532 == \8\4\4\7\3\5\3\2 ]] 00:30:58.047 14:35:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 84473532 00:30:58.047 14:35:50 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:58.047 14:35:50 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:58.307 Running I/O for 1 seconds... 00:30:59.242 00:30:59.242 Latency(us) 00:30:59.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.242 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:59.242 nvme0n1 : 1.01 17865.28 69.79 0.00 0.00 7134.62 5727.28 15614.66 00:30:59.242 =================================================================================================================== 00:30:59.242 Total : 17865.28 69.79 0.00 0.00 7134.62 5727.28 15614.66 00:30:59.242 0 00:30:59.242 14:35:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:59.242 14:35:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:59.501 14:35:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:59.501 14:35:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:59.501 14:35:51 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.501 14:35:51 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.501 14:35:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:59.760 [2024-07-12 14:35:51.645161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:59.761 [2024-07-12 14:35:51.645900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9fd0 (107): Transport endpoint is not connected 00:30:59.761 [2024-07-12 14:35:51.646893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xeb9fd0 (9): Bad file descriptor 00:30:59.761 [2024-07-12 14:35:51.647894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:59.761 [2024-07-12 14:35:51.647904] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:59.761 [2024-07-12 14:35:51.647911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:59.761 request: 00:30:59.761 { 00:30:59.761 "name": "nvme0", 00:30:59.761 "trtype": "tcp", 00:30:59.761 "traddr": "127.0.0.1", 00:30:59.761 "adrfam": "ipv4", 00:30:59.761 "trsvcid": "4420", 00:30:59.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.761 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.761 "prchk_reftag": false, 00:30:59.761 "prchk_guard": false, 00:30:59.761 "hdgst": false, 00:30:59.761 "ddgst": false, 00:30:59.761 "psk": ":spdk-test:key1", 00:30:59.761 "method": "bdev_nvme_attach_controller", 00:30:59.761 "req_id": 1 00:30:59.761 } 00:30:59.761 Got JSON-RPC error response 00:30:59.761 response: 00:30:59.761 { 00:30:59.761 "code": -5, 00:30:59.761 "message": "Input/output error" 00:30:59.761 } 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@33 -- # sn=84473532 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 84473532 00:30:59.761 1 links removed 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@33 -- # sn=659113606 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 659113606 00:30:59.761 1 links removed 00:30:59.761 14:35:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2746844 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2746844 ']' 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2746844 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2746844 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2746844' 00:30:59.761 killing process with pid 2746844 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 2746844 00:30:59.761 Received shutdown signal, test time was about 1.000000 seconds 00:30:59.761 00:30:59.761 Latency(us) 00:30:59.761 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.761 =================================================================================================================== 00:30:59.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:59.761 14:35:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 2746844 00:31:00.020 14:35:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2746827 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2746827 ']' 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2746827 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2746827 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2746827' 00:31:00.020 killing process with pid 2746827 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 2746827 00:31:00.020 14:35:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 2746827 00:31:00.279 00:31:00.279 real 0m5.219s 00:31:00.279 user 0m9.461s 00:31:00.279 sys 0m1.498s 00:31:00.279 14:35:52 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.279 14:35:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:00.279 ************************************ 00:31:00.279 END TEST keyring_linux 00:31:00.279 ************************************ 00:31:00.539 14:35:52 -- common/autotest_common.sh@1142 -- # return 0 00:31:00.539 14:35:52 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:00.539 14:35:52 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:00.539 14:35:52 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:00.539 14:35:52 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:00.539 14:35:52 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:00.539 14:35:52 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:00.539 14:35:52 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:00.539 14:35:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.539 14:35:52 -- common/autotest_common.sh@10 -- # set +x 00:31:00.539 14:35:52 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:00.539 14:35:52 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:00.539 14:35:52 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:00.539 14:35:52 -- common/autotest_common.sh@10 -- # set +x 00:31:05.809 INFO: APP EXITING 00:31:05.809 INFO: killing all VMs 00:31:05.809 INFO: killing vhost app 00:31:05.809 INFO: EXIT DONE 00:31:07.712 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:07.712 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.3 (8086 2021): Already using 
the ioatdma driver 00:31:07.712 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:07.712 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:10.247 Cleaning 00:31:10.247 Removing: /var/run/dpdk/spdk0/config 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:10.247 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:10.247 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:10.247 Removing: /var/run/dpdk/spdk1/config 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:10.247 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:10.247 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:10.247 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:10.247 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:10.247 Removing: /var/run/dpdk/spdk2/config 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:10.247 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:10.247 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:10.247 Removing: /var/run/dpdk/spdk3/config 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:10.247 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:10.247 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:10.247 Removing: /var/run/dpdk/spdk4/config 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:10.247 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:10.247 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:10.247 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:10.247 Removing: /dev/shm/bdev_svc_trace.1 00:31:10.247 Removing: /dev/shm/nvmf_trace.0 00:31:10.247 Removing: /dev/shm/spdk_tgt_trace.pid2362833 00:31:10.247 Removing: /var/run/dpdk/spdk0 00:31:10.247 Removing: /var/run/dpdk/spdk1 00:31:10.247 Removing: /var/run/dpdk/spdk2 00:31:10.247 Removing: /var/run/dpdk/spdk3 00:31:10.507 Removing: /var/run/dpdk/spdk4 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2360581 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2361762 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2362833 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2363464 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2364412 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2364650 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2365621 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2365834 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2365981 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2367566 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2368871 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2369253 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2369535 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2369839 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2370130 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2370382 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2370635 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2370907 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2371649 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2374435 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2374897 00:31:10.507 
Removing: /var/run/dpdk/spdk_pid2375159 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2375189 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2375666 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2375897 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2376289 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2376401 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2376663 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2376897 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2377153 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2377167 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2377720 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2377971 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2378258 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2378532 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2378553 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2378835 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2379083 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2379329 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2379582 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2379830 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2380079 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2380330 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2380579 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2380825 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2381082 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2381327 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2381579 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2381829 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2382076 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2382329 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2382575 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2382822 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2383076 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2383359 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2383686 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2383958 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2384032 00:31:10.507 Removing: 
/var/run/dpdk/spdk_pid2384499 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2388526 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2431488 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2435731 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2446298 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2451690 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2455661 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2456158 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2462233 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2468378 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2468386 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2469188 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2470005 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2470921 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2471500 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2471610 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2471840 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2471854 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2471866 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2472771 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2473698 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2474621 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2475089 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2475109 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2475448 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2476577 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2477759 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2486604 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2486856 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2491105 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2496746 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2499545 00:31:10.507 Removing: /var/run/dpdk/spdk_pid2509732 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2518608 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2520223 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2521185 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2538255 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2542030 
00:31:10.508 Removing: /var/run/dpdk/spdk_pid2567113 00:31:10.508 Removing: /var/run/dpdk/spdk_pid2571912 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2573673 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2575560 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2575723 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2575879 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2576060 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2576788 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2578517 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2579487 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2579892 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2582177 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2582733 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2583461 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2587500 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2597438 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2601476 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2607456 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2608754 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2610428 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2615106 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2619149 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2626712 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2626714 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2631255 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2631434 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2631659 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2632118 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2632126 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2636599 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2637162 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2641476 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2644126 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2649636 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2654974 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2664031 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2671024 00:31:10.767 Removing: 
/var/run/dpdk/spdk_pid2671051 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2688832 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2689526 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2690222 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2690739 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2691671 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2692372 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2692894 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2693552 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2697805 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2698143 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2704111 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2704387 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2706612 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2714676 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2714830 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2719767 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2721627 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2723598 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2724780 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2726828 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2727892 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2736615 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2737083 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2737723 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2739987 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2740472 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2740940 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2744587 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2744759 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2746283 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2746827 00:31:10.767 Removing: /var/run/dpdk/spdk_pid2746844 00:31:10.767 Clean 00:31:10.767 14:36:02 -- common/autotest_common.sh@1451 -- # return 0 00:31:10.767 14:36:02 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:10.767 14:36:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.767 14:36:02 -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.026 14:36:02 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:11.026 14:36:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:11.026 14:36:02 -- common/autotest_common.sh@10 -- # set +x 00:31:11.026 14:36:02 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:11.027 14:36:02 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:11.027 14:36:02 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:11.027 14:36:02 -- spdk/autotest.sh@391 -- # hash lcov 00:31:11.027 14:36:02 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:11.027 14:36:02 -- spdk/autotest.sh@393 -- # hostname 00:31:11.027 14:36:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:11.027 geninfo: WARNING: invalid characters removed from testname! 
00:31:32.958 14:36:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:33.897 14:36:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:35.871 14:36:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:37.777 14:36:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:39.153 14:36:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:41.054 14:36:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:42.957 14:36:34 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:42.957 14:36:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.957 14:36:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:42.957 14:36:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.957 14:36:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.957 14:36:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.957 14:36:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.957 14:36:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.957 14:36:34 -- paths/export.sh@5 -- $ export PATH 00:31:42.957 14:36:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.957 14:36:34 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:42.957 14:36:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:31:42.957 14:36:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720787794.XXXXXX 00:31:42.957 14:36:34 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720787794.uCIOpL 00:31:42.957 14:36:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:31:42.957 14:36:34 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:31:42.957 14:36:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:31:42.958 14:36:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:42.958 14:36:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:42.958 14:36:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:31:42.958 14:36:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:42.958 14:36:34 -- common/autotest_common.sh@10 -- $ set +x 00:31:42.958 14:36:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:31:42.958 14:36:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:31:42.958 14:36:34 -- pm/common@17 -- $ local monitor 00:31:42.958 14:36:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:42.958 14:36:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:42.958 14:36:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:42.958 14:36:34 -- pm/common@21 -- $ date +%s 00:31:42.958 14:36:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:42.958 14:36:34 -- pm/common@21 -- $ date +%s 00:31:42.958 14:36:34 -- pm/common@25 -- $ sleep 1 00:31:42.958 14:36:34 -- pm/common@21 -- $ date +%s 00:31:42.958 14:36:34 -- pm/common@21 -- $ date +%s 00:31:42.958 14:36:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787794 00:31:42.958 14:36:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787794 00:31:42.958 14:36:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1720787794 00:31:42.958 14:36:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720787794 00:31:42.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787794_collect-vmstat.pm.log 00:31:42.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787794_collect-cpu-load.pm.log 00:31:42.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787794_collect-cpu-temp.pm.log 00:31:42.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720787794_collect-bmc-pm.bmc.pm.log 00:31:43.896 14:36:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:31:43.896 14:36:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:31:43.896 14:36:35 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:43.896 14:36:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:43.896 14:36:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:43.896 14:36:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:43.896 14:36:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:43.896 14:36:35 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:43.896 14:36:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:43.896 14:36:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:43.896 14:36:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:43.896 14:36:35 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:31:43.896 14:36:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:43.896 14:36:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:43.896 14:36:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:43.896 14:36:35 -- pm/common@44 -- $ pid=2757543 00:31:43.896 14:36:35 -- pm/common@50 -- $ kill -TERM 2757543 00:31:43.896 14:36:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:43.896 14:36:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:43.896 14:36:35 -- pm/common@44 -- $ pid=2757545 00:31:43.896 14:36:35 -- pm/common@50 -- $ kill -TERM 2757545 00:31:43.896 14:36:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:43.896 14:36:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:43.896 14:36:35 -- pm/common@44 -- $ pid=2757546 00:31:43.896 14:36:35 -- pm/common@50 -- $ kill -TERM 2757546 00:31:43.896 14:36:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:43.896 14:36:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:43.896 14:36:35 -- pm/common@44 -- $ pid=2757569 00:31:43.896 14:36:35 -- pm/common@50 -- $ sudo -E kill -TERM 2757569 00:31:43.896 + [[ -n 2256945 ]] 00:31:43.896 + sudo kill 2256945 00:31:43.906 [Pipeline] } 00:31:43.924 [Pipeline] // stage 00:31:43.930 [Pipeline] } 00:31:43.948 [Pipeline] // timeout 00:31:43.954 [Pipeline] } 00:31:43.973 [Pipeline] // catchError 00:31:43.979 [Pipeline] } 00:31:43.997 [Pipeline] // wrap 00:31:44.003 [Pipeline] } 00:31:44.021 [Pipeline] // catchError 00:31:44.029 [Pipeline] stage 00:31:44.031 [Pipeline] { (Epilogue) 00:31:44.046 [Pipeline] catchError 00:31:44.048 [Pipeline] { 00:31:44.061 [Pipeline] echo 00:31:44.063 Cleanup 
processes 00:31:44.068 [Pipeline] sh 00:31:44.350 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:44.350 2757666 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:31:44.350 2757942 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:44.364 [Pipeline] sh 00:31:44.648 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:44.648 ++ grep -v 'sudo pgrep' 00:31:44.648 ++ awk '{print $1}' 00:31:44.648 + sudo kill -9 2757666 00:31:44.661 [Pipeline] sh 00:31:44.946 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:54.932 [Pipeline] sh 00:31:55.216 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:55.216 Artifacts sizes are good 00:31:55.230 [Pipeline] archiveArtifacts 00:31:55.237 Archiving artifacts 00:31:55.408 [Pipeline] sh 00:31:55.721 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:31:55.736 [Pipeline] cleanWs 00:31:55.746 [WS-CLEANUP] Deleting project workspace... 00:31:55.746 [WS-CLEANUP] Deferred wipeout is used... 00:31:55.752 [WS-CLEANUP] done 00:31:55.753 [Pipeline] } 00:31:55.770 [Pipeline] // catchError 00:31:55.783 [Pipeline] sh 00:31:56.065 + logger -p user.info -t JENKINS-CI 00:31:56.074 [Pipeline] } 00:31:56.093 [Pipeline] // stage 00:31:56.098 [Pipeline] } 00:31:56.115 [Pipeline] // node 00:31:56.121 [Pipeline] End of Pipeline 00:31:56.158 Finished: SUCCESS